[jira] [Commented] (KAFKA-16514) Kafka Streams: stream.close(CloseOptions) does not respect options.leaveGroup flag.

2024-04-29 Thread Lianet Magrans (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17841986#comment-17841986
 ] 

Lianet Magrans commented on KAFKA-16514:


If my understanding here is right, this seems like a fair requirement: the 
ability to dynamically decide whether a member should leave the group, 
regardless of its dynamic/static membership. We actually had conversations 
about this while working on the new consumer group protocol, where static 
members do send a leave group request when leaving, but with an epoch that 
explicitly indicates it is a temporary leave (-2). For now this is the only 
way static members leave (temporarily), but the ground is set if we ever 
decide in the future to allow static members to send a definitive leave 
group (e.g. -1 epoch, like dynamic members do). With that context, and back 
to the Streams situation here, I wonder whether we would be better off 
overall with a less hacky solution: enable a close with richer semantics at 
the consumer level, where options/params could be passed to dynamically 
indicate how to close (temporarily, definitively, ...), and then align this 
nicely with the new protocol (and make it work with the legacy one). Thoughts?
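
For illustration only, here is a rough sketch of what such an enriched close 
could look like at the consumer level. The names (ConsumerCloseOptions, 
GroupMembershipOperation) and the shape of the API are purely hypothetical; 
nothing like this exists in the client today.

{code:java}
// Hypothetical sketch of consumer-level close options with "how to leave" semantics.
// All names here are illustrative, not an existing or proposed Kafka API.
public final class ConsumerCloseOptions {

    // A temporary leave would map to the -2 epoch, a definitive leave to -1.
    public enum GroupMembershipOperation { LEAVE_GROUP_TEMPORARILY, LEAVE_GROUP, REMAIN_IN_GROUP }

    private GroupMembershipOperation operation = GroupMembershipOperation.REMAIN_IN_GROUP;
    private java.time.Duration timeout = java.time.Duration.ofSeconds(30);

    public ConsumerCloseOptions operation(GroupMembershipOperation operation) {
        this.operation = operation;
        return this;
    }

    public ConsumerCloseOptions timeout(java.time.Duration timeout) {
        this.timeout = timeout;
        return this;
    }
}

// Hypothetical usage:
// consumer.close(new ConsumerCloseOptions()
//     .operation(ConsumerCloseOptions.GroupMembershipOperation.LEAVE_GROUP));
{code}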

> Kafka Streams: stream.close(CloseOptions) does not respect options.leaveGroup 
> flag.
> ---
>
> Key: KAFKA-16514
> URL: https://issues.apache.org/jira/browse/KAFKA-16514
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 3.7.0
>Reporter: Sal Sorrentino
>Priority: Minor
>
> Working with Kafka Streams 3.7.0, but may affect earlier versions as well.
> When attempting to shut down a Streams application and leave the associated 
> consumer group, the supplied `leaveGroup` option seems to have no effect. 
> Sample code:
> {code:java}
> CloseOptions options = new CloseOptions().leaveGroup(true);
> stream.close(options);{code}
> The expected behavior here is that the group member would shut down and leave 
> the group, immediately triggering a consumer group rebalance. In practice, 
> the rebalance happens after the appropriate timeout configuration has expired.
> I understand the rationale for the default behavior: the assumption is that 
> any associated StateStores would be persisted to disk and that, in the case 
> of a rolling restart/deployment, the rebalance delay may be preferable. However, 
> in our application we are using in-memory state stores and standby replicas. 
> There is no benefit in delaying the rebalance in this setup and we are in 
> need of a way to force a member to leave the group when shutting down.
> The workaround we found is to set an undocumented internal StreamsConfig 
> property to enforce this behavior:
> {code:java}
> props.put("internal.leave.group.on.close", true);
> {code}
> To state the obvious, this is less than ideal.
> Additional configuration details:
> {code:java}
> Properties props = new Properties();
> props.put(StreamsConfig.APPLICATION_ID_CONFIG, "someApplicationId");
> props.put(
> StreamsConfig.BOOTSTRAP_SERVERS_CONFIG,
> "localhost:9092,localhost:9093,localhost:9094");
> props.put(StreamsConfig.REPLICATION_FACTOR_CONFIG, 3);
> props.put(StreamsConfig.NUM_STANDBY_REPLICAS_CONFIG, 1);
> props.put(StreamsConfig.NUM_STREAM_THREADS_CONFIG, numProcessors);
> props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, 
> StreamsConfig.EXACTLY_ONCE_V2);{code}
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-16506) add the scala version of tool-related classes back to core module to follow KIP-906

2024-04-29 Thread PoAn Yang (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17841978#comment-17841978
 ] 

PoAn Yang commented on KAFKA-16506:
---

Hi [~chia7712], I compared the commands under the core/src/main/scala/kafka/admin 
and tools/src/main folders between version 2.8 and trunk. The only missing command 
is 
[PreferredReplicaLeaderElectionCommand.scala|https://github.com/apache/kafka/blob/2.8/core/src/main/scala/kafka/admin/PreferredReplicaLeaderElectionCommand.scala].
 It was deprecated in KAFKA-8405.

> add the scala version of tool-related classes back to core module to follow 
> KIP-906
> ---
>
> Key: KAFKA-16506
> URL: https://issues.apache.org/jira/browse/KAFKA-16506
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Chia-Ping Tsai
>Assignee: PoAn Yang
>Priority: Major
>
> According to 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-906%3A+Tools+migration+guidelines
>  , we have to deprecate the scala version of tool-related classes instead of 
> deleting them.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-16622) MirrorMaker2 first Checkpoint not emitted until consumer group fully catches up once

2024-04-29 Thread Edoardo Comar (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17841927#comment-17841927
 ] 

Edoardo Comar commented on KAFKA-16622:
---

related issue https://issues.apache.org/jira/browse/KAFKA-16364

> MirrorMaker2 first Checkpoint not emitted until consumer group fully catches 
> up once
> ---
>
> Key: KAFKA-16622
> URL: https://issues.apache.org/jira/browse/KAFKA-16622
> Project: Kafka
>  Issue Type: Bug
>  Components: mirrormaker
>Affects Versions: 3.7.0, 3.6.2, 3.8.0
>Reporter: Edoardo Comar
>Priority: Major
> Attachments: connect.log.2024-04-26-10.zip, 
> edo-connect-mirror-maker-sourcetarget.properties
>
>
> We observed an excessively delayed emission of the MM2 Checkpoint record.
> It only gets created when the source consumer reaches the end of a topic. 
> This does not seem reasonable.
> In a very simple setup:
> Tested with a standalone single-process MirrorMaker2 mirroring between two 
> single-node Kafka clusters (MirrorMaker config attached) with quick refresh 
> intervals (e.g. 5 sec) and a small offset.lag.max (e.g. 10):
> create a single topic in the source cluster
> produce data to it (e.g. 1 records)
> start a slow consumer - e.g. fetching 50 records/poll, committing after each 
> poll, and pausing 1 sec between polls (see the sketch after the quoted 
> description)
> watch the Checkpoint topic in the target cluster
> bin/kafka-console-consumer.sh --bootstrap-server localhost:9192 \
>   --topic source.checkpoints.internal \
>   --formatter org.apache.kafka.connect.mirror.formatters.CheckpointFormatter \
>--from-beginning
> -> no record appears in the checkpoint topic until the consumer reaches the 
> end of the topic (i.e. its consumer group lag gets down to 0).
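
For concreteness, a minimal sketch of the slow consumer described in the 
reproduction above. The bootstrap address, topic name, and group id are 
illustrative assumptions, not taken from the attached properties file.

{code:java}
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class SlowConsumerSketch {
    public static void main(String[] args) throws InterruptedException {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // source cluster (assumed)
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "slow-group");              // assumed group id
        props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 50);                // ~50 records per poll
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("test-topic"));      // assumed topic name
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                System.out.println("polled " + records.count() + " records");
                consumer.commitSync();   // commit after each poll
                Thread.sleep(1000);      // pause 1 second between polls
            }
        }
    }
}
{code}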



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-16563) migration to KRaft hanging after MigrationClientException

2024-04-29 Thread Luke Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luke Chen resolved KAFKA-16563.
---
Fix Version/s: 3.8.0
   3.7.1
   Resolution: Fixed

> migration to KRaft hanging after MigrationClientException
> -
>
> Key: KAFKA-16563
> URL: https://issues.apache.org/jira/browse/KAFKA-16563
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 3.7.0
>Reporter: Luke Chen
>Assignee: Luke Chen
>Priority: Major
> Fix For: 3.8.0, 3.7.1
>
>
> When running the ZK-to-KRaft migration process, we encountered an issue where 
> the migration hangs and the `ZkMigrationState` cannot move to the `MIGRATION` 
> state. After investigation, the root cause is that pollEvent didn't retry on 
> the retriable `MigrationClientException` (i.e. ZK client retriable errors) 
> when it should have. Because of this, the poll event will not poll anymore, 
> which means the KRaftMigrationDriver cannot work as expected.
>  
> {code:java}
> 2024-04-11 21:27:55,393 INFO [KRaftMigrationDriver id=5] Encountered 
> ZooKeeper error during event PollEvent. Will retry. 
> (org.apache.kafka.metadata.migration.KRaftMigrationDriver) 
> [controller-5-migration-driver-event-handler]
> org.apache.zookeeper.KeeperException$NodeExistsException: KeeperErrorCode = NodeExists for /migration
> at org.apache.zookeeper.KeeperException.create(KeeperException.java:126)
> at org.apache.zookeeper.KeeperException.create(KeeperException.java:54)
> at kafka.zookeeper.AsyncResponse.maybeThrow(ZooKeeperClient.scala:570)
> at kafka.zk.KafkaZkClient.createInitialMigrationState(KafkaZkClient.scala:1701)
> at kafka.zk.KafkaZkClient.getOrCreateMigrationState(KafkaZkClient.scala:1689)
> at kafka.zk.ZkMigrationClient.$anonfun$getOrCreateMigrationRecoveryState$1(ZkMigrationClient.scala:109)
> at kafka.zk.ZkMigrationClient.getOrCreateMigrationRecoveryState(ZkMigrationClient.scala:69)
> at org.apache.kafka.metadata.migration.KRaftMigrationDriver.applyMigrationOperation(KRaftMigrationDriver.java:248)
> at org.apache.kafka.metadata.migration.KRaftMigrationDriver.recoverMigrationStateFromZK(KRaftMigrationDriver.java:169)
> at org.apache.kafka.metadata.migration.KRaftMigrationDriver.access$1900(KRaftMigrationDriver.java:62)
> at org.apache.kafka.metadata.migration.KRaftMigrationDriver$PollEvent.run(KRaftMigrationDriver.java:794)
> at org.apache.kafka.queue.KafkaEventQueue$EventContext.run(KafkaEventQueue.java:127)
> at org.apache.kafka.queue.KafkaEventQueue$EventHandler.handleEvents(KafkaEventQueue.java:210)
> at org.apache.kafka.queue.KafkaEventQueue$EventHandler.run(KafkaEventQueue.java:181)
> at java.base/java.lang.Thread.run(Thread.java:840){code}
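
For illustration, a minimal sketch of the retry-on-retriable-error idea 
described above, using stand-in types rather than the real KRaftMigrationDriver 
classes; it is not the actual fix.

{code:java}
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Illustrative only: re-enqueue the poll event when a retriable error is hit,
// instead of letting the driver stop polling.
public class PollRetrySketch {

    // Stand-in for the retriable MigrationClientException (ZK client errors).
    static class RetriableMigrationException extends RuntimeException { }

    private final ScheduledExecutorService eventQueue = Executors.newSingleThreadScheduledExecutor();

    void pollEvent() {
        try {
            pollOnce(); // stand-in for the real ZK/KRaft migration work
        } catch (RetriableMigrationException e) {
            System.out.println("Retriable error during PollEvent, will retry: " + e);
            eventQueue.schedule(this::pollEvent, 500, TimeUnit.MILLISECONDS); // retry later
        }
    }

    void pollOnce() {
        // may throw RetriableMigrationException
    }
}
{code}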



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (KAFKA-16640) Replace TestUtils#resource by scala.util.Using

2024-04-28 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16640?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai reassigned KAFKA-16640:
--

Assignee: TengYao Chi  (was: Chia-Ping Tsai)

> Replace TestUtils#resource by scala.util.Using
> --
>
> Key: KAFKA-16640
> URL: https://issues.apache.org/jira/browse/KAFKA-16640
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Chia-Ping Tsai
>Assignee: TengYao Chi
>Priority: Minor
>
> `scala.util.Using` is available in both Scala 2.13 and scala-collection-compat, 
> so we don't need a custom try-with-resources function.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-16640) Replace TestUtils#resource by scala.util.Using

2024-04-28 Thread TengYao Chi (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17841791#comment-17841791
 ] 

TengYao Chi commented on KAFKA-16640:
-

I am able to handle this issue.

> Replace TestUtils#resource by scala.util.Using
> --
>
> Key: KAFKA-16640
> URL: https://issues.apache.org/jira/browse/KAFKA-16640
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Minor
>
> `scala.util.Using` is available in both Scala 2.13 and scala-collection-compat, 
> so we don't need a custom try-with-resources function.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16640) Replace TestUtils#resource by scala.util.Using

2024-04-28 Thread Chia-Ping Tsai (Jira)
Chia-Ping Tsai created KAFKA-16640:
--

 Summary: Replace TestUtils#resource by scala.util.Using
 Key: KAFKA-16640
 URL: https://issues.apache.org/jira/browse/KAFKA-16640
 Project: Kafka
  Issue Type: Improvement
Reporter: Chia-Ping Tsai
Assignee: Chia-Ping Tsai


`scala.util.Using` is available in both Scala 2.13 and scala-collection-compat, 
so we don't need a custom try-with-resources function.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-16639) AsyncKafkaConsumer#close does not send heartbeat to leave group

2024-04-28 Thread Philip Nee (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17841678#comment-17841678
 ] 

Philip Nee commented on KAFKA-16639:


Thanks for reporting. Will do.

> AsyncKafkaConsumer#close does not send heartbeat to leave group
> ---
>
> Key: KAFKA-16639
> URL: https://issues.apache.org/jira/browse/KAFKA-16639
> Project: Kafka
>  Issue Type: Bug
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Major
>  Labels: kip-848-client-support
>
> This bug can be reproduced by immediately closing a consumer that has just 
> been created.
> The root cause is that we skip the new heartbeat used to leave the group when 
> there is an in-flight heartbeat request 
> ([https://github.com/apache/kafka/blob/5de5d967adffd864bad3ec729760a430253abf38/clients/src/main/java/org/apache/kafka/clients/consumer/internals/HeartbeatRequestManager.java#L212]).
> It seems to me the simple solution is to create a heartbeat request in this 
> situation and then send it via pollOnClose 
> ([https://github.com/apache/kafka/blob/5de5d967adffd864bad3ec729760a430253abf38/clients/src/main/java/org/apache/kafka/clients/consumer/internals/RequestManager.java#L62]).
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-16639) AsyncKafkaConsumer#close does not send heartbeat to leave group

2024-04-28 Thread Chia-Ping Tsai (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17841675#comment-17841675
 ] 

Chia-Ping Tsai commented on KAFKA-16639:


[~pnee] Could you please take a look if you have a free cycle? Thanks!

> AsyncKafkaConsumer#close does not send heartbeat to leave group
> ---
>
> Key: KAFKA-16639
> URL: https://issues.apache.org/jira/browse/KAFKA-16639
> Project: Kafka
>  Issue Type: Bug
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Major
>  Labels: kip-848-client-support
>
> This bug can be reproduced by immediately closing a consumer that has just 
> been created.
> The root cause is that we skip the new heartbeat used to leave the group when 
> there is an in-flight heartbeat request 
> ([https://github.com/apache/kafka/blob/5de5d967adffd864bad3ec729760a430253abf38/clients/src/main/java/org/apache/kafka/clients/consumer/internals/HeartbeatRequestManager.java#L212]).
> It seems to me the simple solution is to create a heartbeat request in this 
> situation and then send it via pollOnClose 
> ([https://github.com/apache/kafka/blob/5de5d967adffd864bad3ec729760a430253abf38/clients/src/main/java/org/apache/kafka/clients/consumer/internals/RequestManager.java#L62]).
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16639) AsyncKafkaConsumer#close does not send heartbeat to leave group

2024-04-28 Thread Chia-Ping Tsai (Jira)
Chia-Ping Tsai created KAFKA-16639:
--

 Summary: AsyncKafkaConsumer#close does not send heartbeat to leave 
group
 Key: KAFKA-16639
 URL: https://issues.apache.org/jira/browse/KAFKA-16639
 Project: Kafka
  Issue Type: Bug
Reporter: Chia-Ping Tsai
Assignee: Chia-Ping Tsai


This bug can be reproduced by immediately closing a consumer that has just 
been created.

The root cause is that we skip the new heartbeat used to leave the group when 
there is an in-flight heartbeat request 
([https://github.com/apache/kafka/blob/5de5d967adffd864bad3ec729760a430253abf38/clients/src/main/java/org/apache/kafka/clients/consumer/internals/HeartbeatRequestManager.java#L212]).

It seems to me the simple solution is to create a heartbeat request in this 
situation and then send it via pollOnClose 
([https://github.com/apache/kafka/blob/5de5d967adffd864bad3ec729760a430253abf38/clients/src/main/java/org/apache/kafka/clients/consumer/internals/RequestManager.java#L62]).
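
A minimal reproduction sketch; the bootstrap address, group id, and topic are 
illustrative assumptions, and `group.protocol=consumer` is what selects the new 
AsyncKafkaConsumer.

{code:java}
// Sketch of the reproduction: create a consumer and close it right away.
// With the bug described above, the final leave-group heartbeat may be skipped.
// (imports of ConsumerConfig, KafkaConsumer, StringDeserializer, Collections omitted)
Properties props = new Properties();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");   // assumed
props.put(ConsumerConfig.GROUP_ID_CONFIG, "test-group");                // assumed
props.put(ConsumerConfig.GROUP_PROTOCOL_CONFIG, "consumer");            // use the new AsyncKafkaConsumer
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);

KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
consumer.subscribe(Collections.singletonList("test-topic"));            // assumed topic
consumer.close();  // expected: a heartbeat that leaves the group is sent during close
{code}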

 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-16514) Kafka Streams: stream.close(CloseOptions) does not respect options.leaveGroup flag.

2024-04-28 Thread Sal Sorrentino (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17841670#comment-17841670
 ] 

Sal Sorrentino commented on KAFKA-16514:


Well, this is not entirely accurate. As mentioned in one of my first comments, 
by providing a `group.instance.id` you can achieve the desired behavior 
without using any "internal" configs. However, I agree that it does seem broken 
that the only way to voluntarily leave a group is to be a static member.
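
For reference, a minimal sketch of the static-membership workaround mentioned 
above. The instance id and connection settings are arbitrary examples, the rest 
of the Streams configuration is elided, and `topology` is assumed to exist.

{code:java}
// Workaround sketch: make the Streams member static by setting group.instance.id
// on the main consumer, then close with leaveGroup(true). Values are illustrative.
Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "someApplicationId");
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
props.put(StreamsConfig.consumerPrefix(ConsumerConfig.GROUP_INSTANCE_ID_CONFIG), "instance-1");

KafkaStreams streams = new KafkaStreams(topology, props);   // topology assumed to exist
streams.start();
// ... later, on shutdown (CloseOptions is KafkaStreams.CloseOptions):
streams.close(new CloseOptions().leaveGroup(true));
{code}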

> Kafka Streams: stream.close(CloseOptions) does not respect options.leaveGroup 
> flag.
> ---
>
> Key: KAFKA-16514
> URL: https://issues.apache.org/jira/browse/KAFKA-16514
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 3.7.0
>Reporter: Sal Sorrentino
>Priority: Minor
>
> Working with Kafka Streams 3.7.0, but may affect earlier versions as well.
> When attempting to shut down a Streams application and leave the associated 
> consumer group, the supplied `leaveGroup` option seems to have no effect. 
> Sample code:
> {code:java}
> CloseOptions options = new CloseOptions().leaveGroup(true);
> stream.close(options);{code}
> The expected behavior here is that the group member would shut down and leave 
> the group, immediately triggering a consumer group rebalance. In practice, 
> the rebalance happens after the appropriate timeout configuration has expired.
> I understand the rationale for the default behavior: the assumption is that 
> any associated StateStores would be persisted to disk and that, in the case 
> of a rolling restart/deployment, the rebalance delay may be preferable. However, 
> in our application we are using in-memory state stores and standby replicas. 
> There is no benefit in delaying the rebalance in this setup and we are in 
> need of a way to force a member to leave the group when shutting down.
> The workaround we found is to set an undocumented internal StreamsConfig 
> property to enforce this behavior:
> {code:java}
> props.put("internal.leave.group.on.close", true);
> {code}
> To state the obvious, this is less than ideal.
> Additional configuration details:
> {code:java}
> Properties props = new Properties();
> props.put(StreamsConfig.APPLICATION_ID_CONFIG, "someApplicationId");
> props.put(
> StreamsConfig.BOOTSTRAP_SERVERS_CONFIG,
> "localhost:9092,localhost:9093,localhost:9094");
> props.put(StreamsConfig.REPLICATION_FACTOR_CONFIG, 3);
> props.put(StreamsConfig.NUM_STANDBY_REPLICAS_CONFIG, 1);
> props.put(StreamsConfig.NUM_STREAM_THREADS_CONFIG, numProcessors);
> props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, 
> StreamsConfig.EXACTLY_ONCE_V2);{code}
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16638) Align the naming convention for config and default variables in *Config classes

2024-04-28 Thread Omnia Ibrahim (Jira)
Omnia Ibrahim created KAFKA-16638:
-

 Summary: Align the naming convention for config and default 
variables in *Config classes
 Key: KAFKA-16638
 URL: https://issues.apache.org/jira/browse/KAFKA-16638
 Project: Kafka
  Issue Type: Task
Reporter: Omnia Ibrahim


Some classes in the code are violating the naming convention for config, 
doc, and default variables, which is:
 * `_CONFIG` suffix for defining the configuration
 * `_DEFAULT` suffix for the default value
 * `_DOC` suffix for the doc

The following classes need to be updated (a sketch of the target convention 
follows this list):
 * `CleanerConfig` and `RemoteLogManagerConfig` to use the `_CONFIG` suffix 
instead of `_PROP`.
 * Others like `LogConfig` and `QuorumConfig` to use the `_DEFAULT` suffix 
instead of the `DEFAULT_` prefix.
 * The same goes for `CommonClientConfigs` and `StreamsConfig`; however, these 
are public interfaces and will need a KIP to rename the default value variables 
and mark the old ones as deprecated. This might need to be broken out into a 
different Jira.
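
A minimal sketch of the target convention, using a made-up config class rather 
than any of the classes listed above:

{code:java}
// Illustrative only: the _CONFIG / _DEFAULT / _DOC naming convention applied to
// a hypothetical config class.
public final class ExampleConfig {
    public static final String RETENTION_MS_CONFIG = "retention.ms";                  // _CONFIG: the config key
    public static final long RETENTION_MS_DEFAULT = 604_800_000L;                     // _DEFAULT: the default value
    public static final String RETENTION_MS_DOC = "Retention time in milliseconds.";  // _DOC: the doc string
}
{code}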



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16637) KIP-848 does not work well

2024-04-28 Thread sanghyeok An (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sanghyeok An updated KAFKA-16637:
-
Description: 
I want to test the next generation of the consumer rebalance protocol 
([https://cwiki.apache.org/confluence/display/KAFKA/The+Next+Generation+of+the+Consumer+Rebalance+Protocol+%28KIP-848%29+-+Early+Access+Release+Notes]).

 

However, it does not work well. 

You can check my setup below.

 

*Docker-compose.yaml*

[https://github.com/chickenchickenlove/kraft-test/blob/main/docker-compose/docker-compose.yaml]

 

*Consumer Code*

[https://github.com/chickenchickenlove/kraft-test/blob/main/src/main/java/org/example/Main.java]

 

*Consumer logs*

[main] INFO org.apache.kafka.common.telemetry.internals.KafkaMetricsCollector - 
initializing Kafka metrics collector
[main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 3.7.0
[main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: 
2ae524ed625438c5
[main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 
1714309299215
[main] INFO org.apache.kafka.clients.consumer.internals.AsyncKafkaConsumer - 
[Consumer clientId=1-1, groupId=1] Subscribed to topic(s): test-topic1
[consumer_background_thread] INFO org.apache.kafka.clients.Metadata - [Consumer 
clientId=1-1, groupId=1] Cluster ID: Some(MkU3OEVBNTcwNTJENDM2Qk)

Stuck In here...

 

*Broker logs* 

broker    | [2024-04-28 12:42:27,751] INFO Sent auto-creation request for 
Set(__consumer_offsets) to the active controller. 
(kafka.server.DefaultAutoTopicCreationManager)
broker    | [2024-04-28 12:42:27,801] INFO Sent auto-creation request for 
Set(__consumer_offsets) to the active controller. 
(kafka.server.DefaultAutoTopicCreationManager)
broker    | [2024-04-28 12:42:28,211] INFO Sent auto-creation request for 
Set(__consumer_offsets) to the active controller. 
(kafka.server.DefaultAutoTopicCreationManager)
broker    | [2024-04-28 12:42:28,259] INFO Sent auto-creation request for 
Set(__consumer_offsets) to the active controller. 
(kafka.server.DefaultAutoTopicCreationManager)
broker    | [2024-04-28 12:42:28,727] INFO Sent auto-creation request for 
Set(__consumer_offsets) to the active controller. 
(kafka.server.DefaultAutoTopicCreationManager)

stuck in here

  was:
I want to test the next generation of the consumer rebalance protocol 
([https://cwiki.apache.org/confluence/display/KAFKA/The+Next+Generation+of+the+Consumer+Rebalance+Protocol+%28KIP-848%29+-+Early+Access+Release+Notes]).

 

However, it does not work well. 

You can check my setup below.

 

### Docker-compose.yaml

[https://github.com/chickenchickenlove/kraft-test/blob/main/docker-compose/docker-compose.yaml]

 

### Consumer Code

[https://github.com/chickenchickenlove/kraft-test/blob/main/src/main/java/org/example/Main.java]

 

### Consumer logs

[main] INFO org.apache.kafka.common.telemetry.internals.KafkaMetricsCollector - 
initializing Kafka metrics collector
[main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 3.7.0
[main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: 
2ae524ed625438c5
[main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 
1714309299215
[main] INFO org.apache.kafka.clients.consumer.internals.AsyncKafkaConsumer - 
[Consumer clientId=1-1, groupId=1] Subscribed to topic(s): test-topic1
[consumer_background_thread] INFO org.apache.kafka.clients.Metadata - [Consumer 
clientId=1-1, groupId=1] Cluster ID: Some(MkU3OEVBNTcwNTJENDM2Qk)

Stuck In here...

 

### Broker logs 

broker    | [2024-04-28 12:42:27,751] INFO Sent auto-creation request for 
Set(__consumer_offsets) to the active controller. 
(kafka.server.DefaultAutoTopicCreationManager)
broker    | [2024-04-28 12:42:27,801] INFO Sent auto-creation request for 
Set(__consumer_offsets) to the active controller. 
(kafka.server.DefaultAutoTopicCreationManager)
broker    | [2024-04-28 12:42:28,211] INFO Sent auto-creation request for 
Set(__consumer_offsets) to the active controller. 
(kafka.server.DefaultAutoTopicCreationManager)
broker    | [2024-04-28 12:42:28,259] INFO Sent auto-creation request for 
Set(__consumer_offsets) to the active controller. 
(kafka.server.DefaultAutoTopicCreationManager)
broker    | [2024-04-28 12:42:28,727] INFO Sent auto-creation request for 
Set(__consumer_offsets) to the active controller. 
(kafka.server.DefaultAutoTopicCreationManager)

stuck in here


> KIP-848 does not work well
> --
>
> Key: KAFKA-16637
> URL: https://issues.apache.org/jira/browse/KAFKA-16637
> Project: Kafka
>  Issue Type: Bug
>Reporter: sanghyeok An
>Priority: Minor
>
> I want to test the next generation of the consumer rebalance protocol 
> ([https://cwiki.apache.org/confluence/display/KAFKA/The+Next+Generation+of+th

[jira] [Updated] (KAFKA-16637) KIP-848 does not work well

2024-04-28 Thread sanghyeok An (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sanghyeok An updated KAFKA-16637:
-
Description: 
I want to test the next generation of the consumer rebalance protocol 
([https://cwiki.apache.org/confluence/display/KAFKA/The+Next+Generation+of+the+Consumer+Rebalance+Protocol+%28KIP-848%29+-+Early+Access+Release+Notes]).

 

However, it does not work well. 

You can check my setup below.

 

### Docker-compose.yaml

[https://github.com/chickenchickenlove/kraft-test/blob/main/docker-compose/docker-compose.yaml]

 

### Consumer Code

[https://github.com/chickenchickenlove/kraft-test/blob/main/src/main/java/org/example/Main.java]

 

### Consumer logs

[main] INFO org.apache.kafka.common.telemetry.internals.KafkaMetricsCollector - 
initializing Kafka metrics collector
[main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 3.7.0
[main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: 
2ae524ed625438c5
[main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 
1714309299215
[main] INFO org.apache.kafka.clients.consumer.internals.AsyncKafkaConsumer - 
[Consumer clientId=1-1, groupId=1] Subscribed to topic(s): test-topic1
[consumer_background_thread] INFO org.apache.kafka.clients.Metadata - [Consumer 
clientId=1-1, groupId=1] Cluster ID: Some(MkU3OEVBNTcwNTJENDM2Qk)

Stuck In here...

 

### Broker logs 

broker    | [2024-04-28 12:42:27,751] INFO Sent auto-creation request for 
Set(__consumer_offsets) to the active controller. 
(kafka.server.DefaultAutoTopicCreationManager)
broker    | [2024-04-28 12:42:27,801] INFO Sent auto-creation request for 
Set(__consumer_offsets) to the active controller. 
(kafka.server.DefaultAutoTopicCreationManager)
broker    | [2024-04-28 12:42:28,211] INFO Sent auto-creation request for 
Set(__consumer_offsets) to the active controller. 
(kafka.server.DefaultAutoTopicCreationManager)
broker    | [2024-04-28 12:42:28,259] INFO Sent auto-creation request for 
Set(__consumer_offsets) to the active controller. 
(kafka.server.DefaultAutoTopicCreationManager)
broker    | [2024-04-28 12:42:28,727] INFO Sent auto-creation request for 
Set(__consumer_offsets) to the active controller. 
(kafka.server.DefaultAutoTopicCreationManager)

stuck in here

> KIP-848 does not work well
> --
>
> Key: KAFKA-16637
> URL: https://issues.apache.org/jira/browse/KAFKA-16637
> Project: Kafka
>  Issue Type: Bug
>Reporter: sanghyeok An
>Priority: Minor
>
> I want to test the next generation of the consumer rebalance protocol 
> ([https://cwiki.apache.org/confluence/display/KAFKA/The+Next+Generation+of+the+Consumer+Rebalance+Protocol+%28KIP-848%29+-+Early+Access+Release+Notes]).
>  
> However, it does not work well. 
> You can check my setup below.
>  
> ### Docker-compose.yaml
> [https://github.com/chickenchickenlove/kraft-test/blob/main/docker-compose/docker-compose.yaml]
>  
> ### Consumer Code
> [https://github.com/chickenchickenlove/kraft-test/blob/main/src/main/java/org/example/Main.java]
>  
> ### Consumer logs
> [main] INFO org.apache.kafka.common.telemetry.internals.KafkaMetricsCollector 
> - initializing Kafka metrics collector
> [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 3.7.0
> [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: 
> 2ae524ed625438c5
> [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 
> 1714309299215
> [main] INFO org.apache.kafka.clients.consumer.internals.AsyncKafkaConsumer - 
> [Consumer clientId=1-1, groupId=1] Subscribed to topic(s): test-topic1
> [consumer_background_thread] INFO org.apache.kafka.clients.Metadata - 
> [Consumer clientId=1-1, groupId=1] Cluster ID: Some(MkU3OEVBNTcwNTJENDM2Qk)
> Stuck In here...
>  
> ### Broker logs 
> broker    | [2024-04-28 12:42:27,751] INFO Sent auto-creation request for 
> Set(__consumer_offsets) to the active controller. 
> (kafka.server.DefaultAutoTopicCreationManager)
> broker    | [2024-04-28 12:42:27,801] INFO Sent auto-creation request for 
> Set(__consumer_offsets) to the active controller. 
> (kafka.server.DefaultAutoTopicCreationManager)
> broker    | [2024-04-28 12:42:28,211] INFO Sent auto-creation request for 
> Set(__consumer_offsets) to the active controller. 
> (kafka.server.DefaultAutoTopicCreationManager)
> broker    | [2024-04-28 12:42:28,259] INFO Sent auto-creation request for 
> Set(__consumer_offsets) to the active controller. 
> (kafka.server.DefaultAutoTopicCreationManager)
> broker    | [2024-04-28 12:42:28,727] INFO Sent auto-creation request for 
> Set(__consumer_offsets) to the active controller. 
> (kafka.server.DefaultAutoTopicCreationManager)
> stuck in here
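
As a side note for anyone reproducing this, a hedged consumer-side sketch of 
opting into the KIP-848 protocol (the logs above show AsyncKafkaConsumer, so 
the reporter already has this enabled; values are assumptions, and the broker 
must also enable the new group coordinator as described in the linked 
early-access release notes):

{code:java}
// Illustrative consumer settings for trying the KIP-848 early-access protocol.
// (imports of ConsumerConfig, KafkaConsumer, StringDeserializer, Collections omitted)
Properties props = new Properties();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");   // assumed
props.put(ConsumerConfig.GROUP_ID_CONFIG, "1");
props.put(ConsumerConfig.GROUP_PROTOCOL_CONFIG, "consumer");            // opt in to the new rebalance protocol
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);

KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
consumer.subscribe(Collections.singletonList("test-topic1"));
{code}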



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16637) KIP-848 does not work well

2024-04-28 Thread sanghyeok An (Jira)
sanghyeok An created KAFKA-16637:


 Summary: KIP-848 does not work well
 Key: KAFKA-16637
 URL: https://issues.apache.org/jira/browse/KAFKA-16637
 Project: Kafka
  Issue Type: Bug
Reporter: sanghyeok An






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16637) KIP-848 does not work well

2024-04-28 Thread sanghyeok An (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sanghyeok An updated KAFKA-16637:
-
Priority: Minor  (was: Major)

> KIP-848 does not work well
> --
>
> Key: KAFKA-16637
> URL: https://issues.apache.org/jira/browse/KAFKA-16637
> Project: Kafka
>  Issue Type: Bug
>Reporter: sanghyeok An
>Priority: Minor
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (KAFKA-15156) Update cipherInformation correctly in DefaultChannelMetadataRegistry

2024-04-28 Thread Walter Hernandez (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Hernandez reassigned KAFKA-15156:


Assignee: (was: Walter Hernandez)

> Update cipherInformation correctly in DefaultChannelMetadataRegistry
> 
>
> Key: KAFKA-15156
> URL: https://issues.apache.org/jira/browse/KAFKA-15156
> Project: Kafka
>  Issue Type: Bug
>Reporter: Divij Vaidya
>Priority: Minor
>  Labels: newbie
>
> At 
> [https://github.com/apache/kafka/blob/4a61b48d3dca81e28b57f10af6052f36c50a05e3/clients/src/main/java/org/apache/kafka/common/network/DefaultChannelMetadataRegistry.java#L25]
>  
> we do not end up assigning the new value of cipherInformation to the member 
> variable. 
> The code over here should be the following, so that we can update the cipher 
> information:
> {noformat}
> if (cipherInformation == null) {
>     throw new IllegalArgumentException("cipherInformation should not be null");
> }
> this.cipherInformation = cipherInformation;{noformat}
>  
>  
> While this class is only used in tests, we should still fix this. It's a 
> minor bug.
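
For context, a hedged sketch of what the fixed registration could look like; 
the class below is a stand-in, not the actual DefaultChannelMetadataRegistry 
source, and the method name is assumed from the registry interface.

{code:java}
// Illustrative only. The point from the issue: validate the argument, then
// actually assign it to the member variable so the cipher information is updated.
public class ChannelMetadataRegistrySketch {
    private CipherInformation cipherInformation;   // org.apache.kafka.common.network.CipherInformation

    public void registerCipherInformation(CipherInformation cipherInformation) {
        if (cipherInformation == null) {
            throw new IllegalArgumentException("cipherInformation should not be null");
        }
        this.cipherInformation = cipherInformation;  // per the issue, the new value was not being assigned before
    }
}
{code}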



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-15897) Flaky Test: testWrongIncarnationId() – kafka.server.ControllerRegistrationManagerTest

2024-04-28 Thread Chia-Ping Tsai (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-15897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17841597#comment-17841597
 ] 

Chia-Ping Tsai commented on KAFKA-15897:


It seems the root cause is that `context.mockChannelManager.poll()` 
([https://github.com/apache/kafka/blob/trunk/core/src/test/scala/unit/kafka/server/ControllerRegistrationManagerTest.scala#L221]) 
is executed after the event queue thread adds the `ControllerRegistrationRequest` 
([https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/server/ControllerRegistrationManager.scala#L233]). 
The request is consumed without any response, as we don't set a response 
([https://github.com/apache/kafka/blob/trunk/core/src/test/scala/unit/kafka/server/ControllerRegistrationManagerTest.scala#L226]). 
Hence, the following test 
([https://github.com/apache/kafka/blob/trunk/core/src/test/scala/unit/kafka/server/ControllerRegistrationManagerTest.scala#L230]) 
fails as the request is gone.

I feel the first poll 
([https://github.com/apache/kafka/blob/trunk/core/src/test/scala/unit/kafka/server/ControllerRegistrationManagerTest.scala#L221]) 
is unnecessary, since the check is waiting for the event queue thread to send 
the controller registration request. That does NOT need the poll, and removing 
the poll can prevent the above race condition.

 

 

> Flaky Test: testWrongIncarnationId() – 
> kafka.server.ControllerRegistrationManagerTest
> -
>
> Key: KAFKA-15897
> URL: https://issues.apache.org/jira/browse/KAFKA-15897
> Project: Kafka
>  Issue Type: Test
>Reporter: Apoorv Mittal
>Assignee: Chia-Ping Tsai
>Priority: Major
>  Labels: flaky-test
>
> Build run: 
> https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka-pr/detail/PR-14699/21/tests/
>  
> {code:java}
> org.opentest4j.AssertionFailedError: expected: <(false,1,0)> but was: 
> <(true,0,0)>Stacktraceorg.opentest4j.AssertionFailedError: expected: 
> <(false,1,0)> but was: <(true,0,0)>at 
> app//org.junit.jupiter.api.AssertionFailureBuilder.build(AssertionFailureBuilder.java:151)
>at 
> app//org.junit.jupiter.api.AssertionFailureBuilder.buildAndThrow(AssertionFailureBuilder.java:132)
>at 
> app//org.junit.jupiter.api.AssertEquals.failNotEqual(AssertEquals.java:197)  
> at 
> app//org.junit.jupiter.api.AssertEquals.assertEquals(AssertEquals.java:182)  
> at 
> app//org.junit.jupiter.api.AssertEquals.assertEquals(AssertEquals.java:177)  
> at app//org.junit.jupiter.api.Assertions.assertEquals(Assertions.java:1141)   
>   at 
> app//kafka.server.ControllerRegistrationManagerTest.$anonfun$testWrongIncarnationId$3(ControllerRegistrationManagerTest.scala:228)
>at 
> app//org.apache.kafka.test.TestUtils.retryOnExceptionWithTimeout(TestUtils.java:379)
>  at 
> app//kafka.server.ControllerRegistrationManagerTest.testWrongIncarnationId(ControllerRegistrationManagerTest.scala:226)
>   at 
> java.base@17.0.7/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)at 
> java.base@17.0.7/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
>   at 
> java.base@17.0.7/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.base@17.0.7/java.lang.reflect.Method.invoke(Method.java:568)
> at 
> app//org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:728)
>   at 
> app//org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
>at 
> app//org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131)
>  at 
> app//org.junit.jupiter.engine.extension.SameThreadTimeoutInvocation.proceed(SameThreadTimeoutInvocation.java:45)
>  at 
> app//org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:156)
> at 
> app//org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:147)
>   at 
> app//org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:86)
>at 
> app//org.junit.jupiter.engine.execution.InterceptingExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(InterceptingExecutableInvoker.java:103)
> at 
> app//org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.lambda$invoke$0(InterceptingExecutableInvoker.java:93)
>  at 
> app//org.junit.jupiter.engine.execution.InvocationInterceptorChain

[jira] [Assigned] (KAFKA-15897) Flaky Test: testWrongIncarnationId() – kafka.server.ControllerRegistrationManagerTest

2024-04-27 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai reassigned KAFKA-15897:
--

Assignee: Chia-Ping Tsai

> Flaky Test: testWrongIncarnationId() – 
> kafka.server.ControllerRegistrationManagerTest
> -
>
> Key: KAFKA-15897
> URL: https://issues.apache.org/jira/browse/KAFKA-15897
> Project: Kafka
>  Issue Type: Test
>Reporter: Apoorv Mittal
>Assignee: Chia-Ping Tsai
>Priority: Major
>  Labels: flaky-test
>
> Build run: 
> https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka-pr/detail/PR-14699/21/tests/
>  
> {code:java}
> org.opentest4j.AssertionFailedError: expected: <(false,1,0)> but was: 
> <(true,0,0)>Stacktraceorg.opentest4j.AssertionFailedError: expected: 
> <(false,1,0)> but was: <(true,0,0)>at 
> app//org.junit.jupiter.api.AssertionFailureBuilder.build(AssertionFailureBuilder.java:151)
>at 
> app//org.junit.jupiter.api.AssertionFailureBuilder.buildAndThrow(AssertionFailureBuilder.java:132)
>at 
> app//org.junit.jupiter.api.AssertEquals.failNotEqual(AssertEquals.java:197)  
> at 
> app//org.junit.jupiter.api.AssertEquals.assertEquals(AssertEquals.java:182)  
> at 
> app//org.junit.jupiter.api.AssertEquals.assertEquals(AssertEquals.java:177)  
> at app//org.junit.jupiter.api.Assertions.assertEquals(Assertions.java:1141)   
>   at 
> app//kafka.server.ControllerRegistrationManagerTest.$anonfun$testWrongIncarnationId$3(ControllerRegistrationManagerTest.scala:228)
>at 
> app//org.apache.kafka.test.TestUtils.retryOnExceptionWithTimeout(TestUtils.java:379)
>  at 
> app//kafka.server.ControllerRegistrationManagerTest.testWrongIncarnationId(ControllerRegistrationManagerTest.scala:226)
>   at 
> java.base@17.0.7/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)at 
> java.base@17.0.7/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
>   at 
> java.base@17.0.7/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.base@17.0.7/java.lang.reflect.Method.invoke(Method.java:568)
> at 
> app//org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:728)
>   at 
> app//org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
>at 
> app//org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131)
>  at 
> app//org.junit.jupiter.engine.extension.SameThreadTimeoutInvocation.proceed(SameThreadTimeoutInvocation.java:45)
>  at 
> app//org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:156)
> at 
> app//org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:147)
>   at 
> app//org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:86)
>at 
> app//org.junit.jupiter.engine.execution.InterceptingExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(InterceptingExecutableInvoker.java:103)
> at 
> app//org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.lambda$invoke$0(InterceptingExecutableInvoker.java:93)
>  at 
> app//org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106)
> at 
> app//org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64)
>at 
> app//org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45)
> at 
> app//org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37)
> at 
> app//org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.invoke(InterceptingExecutableInvoker.java:92)
>   at 
> app//org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.invoke(InterceptingExecutableInvoker.java:86)
>   at 
> app//org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$7(TestMethodTestDescriptor.java:218)
>at 
> app//org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
>at 
> app//org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java

[jira] [Commented] (KAFKA-16092) Queues for Kafka

2024-04-27 Thread zhangzhisheng (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17841566#comment-17841566
 ] 

zhangzhisheng commented on KAFKA-16092:
---

this new feature is good 

> Queues for Kafka
> 
>
> Key: KAFKA-16092
> URL: https://issues.apache.org/jira/browse/KAFKA-16092
> Project: Kafka
>  Issue Type: New Feature
>Reporter: Andrew Schofield
>Assignee: Andrew Schofield
>Priority: Major
>  Labels: queues-for-kafka
> Attachments: image-2024-04-28-11-05-56-153.png
>
>
> This Jira tracks the development of KIP-932: 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-932%3A+Queues+for+Kafka



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (KAFKA-16092) Queues for Kafka

2024-04-27 Thread zhangzhisheng (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17841566#comment-17841566
 ] 

zhangzhisheng edited comment on KAFKA-16092 at 4/28/24 3:06 AM:


this new feature is good 


was (Author: zhangzs):
this new feature is good 

> Queues for Kafka
> 
>
> Key: KAFKA-16092
> URL: https://issues.apache.org/jira/browse/KAFKA-16092
> Project: Kafka
>  Issue Type: New Feature
>Reporter: Andrew Schofield
>Assignee: Andrew Schofield
>Priority: Major
>  Labels: queues-for-kafka
> Attachments: image-2024-04-28-11-05-56-153.png
>
>
> This Jira tracks the development of KIP-932: 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-932%3A+Queues+for+Kafka



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16092) Queues for Kafka

2024-04-27 Thread zhangzhisheng (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhangzhisheng updated KAFKA-16092:
--
Attachment: image-2024-04-28-11-05-56-153.png

> Queues for Kafka
> 
>
> Key: KAFKA-16092
> URL: https://issues.apache.org/jira/browse/KAFKA-16092
> Project: Kafka
>  Issue Type: New Feature
>Reporter: Andrew Schofield
>Assignee: Andrew Schofield
>Priority: Major
>  Labels: queues-for-kafka
> Attachments: image-2024-04-28-11-05-56-153.png
>
>
> This Jira tracks the development of KIP-932: 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-932%3A+Queues+for+Kafka



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-16560) Refactor/cleanup BrokerNode/ControllerNode/ClusterConfig

2024-04-27 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai resolved KAFKA-16560.

Fix Version/s: 3.8.0
   Resolution: Fixed

> Refactor/cleanup BrokerNode/ControllerNode/ClusterConfig
> 
>
> Key: KAFKA-16560
> URL: https://issues.apache.org/jira/browse/KAFKA-16560
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Chia-Ping Tsai
>Assignee: Kuan Po Tseng
>Priority: Minor
> Fix For: 3.8.0
>
>
> origin discussion: 
> https://github.com/apache/kafka/pull/15715#discussion_r1564660916
> It seems to me this Jira should address the following tasks (a sketch of the 
> target pattern follows):
> 1. Make them immutable. We have adopted the builder pattern, so all changes 
> should be completed in the builder phase.
> 2. Make all `Builder#build()` methods accept no arguments. Instead, we should 
> add new setters for those arguments.
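
For illustration, a sketch of the target shape with a made-up class (not the 
real BrokerNode/ControllerNode/ClusterConfig code): an immutable object, all 
inputs supplied through builder setters, and a no-argument build().

{code:java}
// Hypothetical sketch of the pattern described above.
public final class NodeSpecSketch {
    private final int id;
    private final String logDir;

    private NodeSpecSketch(Builder builder) {
        this.id = builder.id;
        this.logDir = builder.logDir;
    }

    public static final class Builder {
        private int id;
        private String logDir = "/tmp/node";

        public Builder setId(int id) { this.id = id; return this; }
        public Builder setLogDir(String logDir) { this.logDir = logDir; return this; }

        // build() takes no arguments; everything was supplied via setters above.
        public NodeSpecSketch build() { return new NodeSpecSketch(this); }
    }
}
{code}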



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-15897) Flaky Test: testWrongIncarnationId() – kafka.server.ControllerRegistrationManagerTest

2024-04-27 Thread Igor Soarez (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Soarez updated KAFKA-15897:

Issue Type: Test  (was: Bug)

> Flaky Test: testWrongIncarnationId() – 
> kafka.server.ControllerRegistrationManagerTest
> -
>
> Key: KAFKA-15897
> URL: https://issues.apache.org/jira/browse/KAFKA-15897
> Project: Kafka
>  Issue Type: Test
>Reporter: Apoorv Mittal
>Priority: Major
>  Labels: flaky-test
>
> Build run: 
> https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka-pr/detail/PR-14699/21/tests/
>  
> {code:java}
> org.opentest4j.AssertionFailedError: expected: <(false,1,0)> but was: 
> <(true,0,0)>Stacktraceorg.opentest4j.AssertionFailedError: expected: 
> <(false,1,0)> but was: <(true,0,0)>at 
> app//org.junit.jupiter.api.AssertionFailureBuilder.build(AssertionFailureBuilder.java:151)
>at 
> app//org.junit.jupiter.api.AssertionFailureBuilder.buildAndThrow(AssertionFailureBuilder.java:132)
>at 
> app//org.junit.jupiter.api.AssertEquals.failNotEqual(AssertEquals.java:197)  
> at 
> app//org.junit.jupiter.api.AssertEquals.assertEquals(AssertEquals.java:182)  
> at 
> app//org.junit.jupiter.api.AssertEquals.assertEquals(AssertEquals.java:177)  
> at app//org.junit.jupiter.api.Assertions.assertEquals(Assertions.java:1141)   
>   at 
> app//kafka.server.ControllerRegistrationManagerTest.$anonfun$testWrongIncarnationId$3(ControllerRegistrationManagerTest.scala:228)
>at 
> app//org.apache.kafka.test.TestUtils.retryOnExceptionWithTimeout(TestUtils.java:379)
>  at 
> app//kafka.server.ControllerRegistrationManagerTest.testWrongIncarnationId(ControllerRegistrationManagerTest.scala:226)
>   at 
> java.base@17.0.7/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)at 
> java.base@17.0.7/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
>   at 
> java.base@17.0.7/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.base@17.0.7/java.lang.reflect.Method.invoke(Method.java:568)
> at 
> app//org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:728)
>   at 
> app//org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
>at 
> app//org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131)
>  at 
> app//org.junit.jupiter.engine.extension.SameThreadTimeoutInvocation.proceed(SameThreadTimeoutInvocation.java:45)
>  at 
> app//org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:156)
> at 
> app//org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:147)
>   at 
> app//org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:86)
>at 
> app//org.junit.jupiter.engine.execution.InterceptingExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(InterceptingExecutableInvoker.java:103)
> at 
> app//org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.lambda$invoke$0(InterceptingExecutableInvoker.java:93)
>  at 
> app//org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106)
> at 
> app//org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64)
>at 
> app//org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45)
> at 
> app//org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37)
> at 
> app//org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.invoke(InterceptingExecutableInvoker.java:92)
>   at 
> app//org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.invoke(InterceptingExecutableInvoker.java:86)
>   at 
> app//org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$7(TestMethodTestDescriptor.java:218)
>at 
> app//org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
>at 
> app//org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:214)
> at 
> app//org.junit.jupiter.engine.descriptor.TestMethodTestDescripto

[jira] [Created] (KAFKA-16636) Flaky test - testStickyTaskAssignorLargePartitionCount – org.apache.kafka.streams.processor.internals.StreamsAssignmentScaleTest

2024-04-27 Thread Igor Soarez (Jira)
Igor Soarez created KAFKA-16636:
---

 Summary: Flaky test - testStickyTaskAssignorLargePartitionCount – 
org.apache.kafka.streams.processor.internals.StreamsAssignmentScaleTest
 Key: KAFKA-16636
 URL: https://issues.apache.org/jira/browse/KAFKA-16636
 Project: Kafka
  Issue Type: Test
Reporter: Igor Soarez
 Attachments: log (1).txt

testStickyTaskAssignorLargePartitionCount – 
org.apache.kafka.streams.processor.internals.StreamsAssignmentScaleTest
{code:java}
java.lang.AssertionError: The first assignment took too long to complete at 131680ms.
at org.apache.kafka.streams.processor.internals.StreamsAssignmentScaleTest.completeLargeAssignment(StreamsAssignmentScaleTest.java:220)
at org.apache.kafka.streams.processor.internals.StreamsAssignmentScaleTest.testStickyTaskAssignorLargePartitionCount(StreamsAssignmentScaleTest.java:102)
{code}
[https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka-pr/detail/PR-15816/1/tests/]

 

 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-15292) Flaky test IdentityReplicationIntegrationTest#testReplicateSourceDefault()

2024-04-27 Thread Igor Soarez (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Soarez updated KAFKA-15292:

Issue Type: Test  (was: Bug)

> Flaky test IdentityReplicationIntegrationTest#testReplicateSourceDefault()
> --
>
> Key: KAFKA-15292
> URL: https://issues.apache.org/jira/browse/KAFKA-15292
> Project: Kafka
>  Issue Type: Test
>  Components: mirrormaker
>Reporter: Kirk True
>Priority: Major
>  Labels: flaky-test, mirror-maker, mirrormaker
>
> The test testReplicateSourceDefault in 
> `org.apache.kafka.connect.mirror.integration.IdentityReplicationIntegrationTest
>  is flaky about 2% of the time as shown in [Gradle 
> Enterprise|[https://ge.apache.org/scans/tests?search.relativeStartTime=P90D=kafka=America/Los_Angeles=org.apache.kafka.connect.mirror.integration.IdentityReplicationIntegrationTest=FLAKY]].
>  
> {code:java}
> java.lang.RuntimeException: Could not stop worker
> at 
> o.a.k.connect.util.clusters.EmbeddedConnectCluster.stopWorker(EmbeddedConnectCluster.java:230)
> at java.lang.Iterable.forEach(Iterable.java:75)
> at 
> o.a.k.connect.util.clusters.EmbeddedConnectCluster.stop(EmbeddedConnectCluster.java:163)
> at 
> o.a.k.connect.mirror.integration.MirrorConnectorsIntegrationBaseTest.shutdownClusters(MirrorConnectorsIntegrationBaseTest.java:267)
> at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
> at 
> jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:568)
> at 
> org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:727)
> ...
> at 
> worker.org.gradle.process.internal.worker.GradleWorkerMain.main(GradleWorkerMain.java:74)
> Caused by: java.lang.IllegalStateException: !STOPPED
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.destroy(HandlerWrapper.java:140)
> at o.a.k.connect.runtime.rest.RestServer.stop(RestServer.java:361)
> at o.a.k.connect.runtime.Connect.stop(Connect.java:69) at 
> o.a.k.connect.util.clusters.WorkerHandle.stop(WorkerHandle.java:57)
> at 
> o.a.k.connect.util.clusters.EmbeddedConnectCluster.stopWorker(EmbeddedConnectCluster.java:225)
> ... 93 more{code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16635) Flaky test "shouldThrottleOldSegments(String).quorum=kraft" – kafka.server.ReplicationQuotasTest

2024-04-27 Thread Igor Soarez (Jira)
Igor Soarez created KAFKA-16635:
---

 Summary: Flaky test 
"shouldThrottleOldSegments(String).quorum=kraft" – 
kafka.server.ReplicationQuotasTest
 Key: KAFKA-16635
 URL: https://issues.apache.org/jira/browse/KAFKA-16635
 Project: Kafka
  Issue Type: Test
Reporter: Igor Soarez


"shouldThrottleOldSegments(String).quorum=kraft" – 
kafka.server.ReplicationQuotasTest
{code:java}
org.opentest4j.AssertionFailedError: Throttled replication of 2203ms should be > 3600.0ms ==> expected:  but was: 
at app//org.junit.jupiter.api.AssertionFailureBuilder.build(AssertionFailureBuilder.java:151)
at app//org.junit.jupiter.api.AssertionFailureBuilder.buildAndThrow(AssertionFailureBuilder.java:132)
at app//org.junit.jupiter.api.AssertTrue.failNotTrue(AssertTrue.java:63)
at app//org.junit.jupiter.api.AssertTrue.assertTrue(AssertTrue.java:36)
at app//org.junit.jupiter.api.Assertions.assertTrue(Assertions.java:214)
at app//kafka.server.ReplicationQuotasTest.shouldThrottleOldSegments(ReplicationQuotasTest.scala:260)
{code}
https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka-pr/detail/PR-15816/1/tests/



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-15960) Flaky test: testQuotaOverrideDelete(String).quorum=kraft – kafka.api.ClientIdQuotaTest

2024-04-27 Thread Igor Soarez (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Soarez updated KAFKA-15960:

Labels: flaky-test  (was: )

> Flaky test: testQuotaOverrideDelete(String).quorum=kraft – 
> kafka.api.ClientIdQuotaTest
> --
>
> Key: KAFKA-15960
> URL: https://issues.apache.org/jira/browse/KAFKA-15960
> Project: Kafka
>  Issue Type: Test
>Reporter: Apoorv Mittal
>Priority: Major
>  Labels: flaky-test
>
> PR build: 
> [https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka-pr/detail/PR-14767/15/tests/]
>  
> {code:java}
> org.opentest4j.AssertionFailedError: Client with 
> id=QuotasTestProducer-!@#$%^&*() should have been throttled, 0.0 ==> 
> expected:  but was: 
> Stacktraceorg.opentest4j.AssertionFailedError: Client with 
> id=QuotasTestProducer-!@#$%^&*() should have been throttled, 0.0 ==> 
> expected:  but was:at 
> app//org.junit.jupiter.api.AssertionFailureBuilder.build(AssertionFailureBuilder.java:151)
>at 
> app//org.junit.jupiter.api.AssertionFailureBuilder.buildAndThrow(AssertionFailureBuilder.java:132)
>at app//org.junit.jupiter.api.AssertTrue.failNotTrue(AssertTrue.java:63)   
>  at app//org.junit.jupiter.api.AssertTrue.assertTrue(AssertTrue.java:36) 
> at app//org.junit.jupiter.api.Assertions.assertTrue(Assertions.java:210)  
>   at 
> app//kafka.api.QuotaTestClients.verifyThrottleTimeRequestChannelMetric(BaseQuotaTest.scala:260)
>   at 
> app//kafka.api.QuotaTestClients.verifyProduceThrottle(BaseQuotaTest.scala:269)
>at 
> app//kafka.api.BaseQuotaTest.testQuotaOverrideDelete(BaseQuotaTest.scala:157) 
>at 
> java.base@17.0.7/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)at 
> java.base@17.0.7/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
>   at 
> java.base@17.0.7/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  {code}
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-15960) Flaky test: testQuotaOverrideDelete(String).quorum=kraft – kafka.api.ClientIdQuotaTest

2024-04-27 Thread Igor Soarez (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Soarez updated KAFKA-15960:

Issue Type: Test  (was: Bug)

> Flaky test: testQuotaOverrideDelete(String).quorum=kraft – 
> kafka.api.ClientIdQuotaTest
> --
>
> Key: KAFKA-15960
> URL: https://issues.apache.org/jira/browse/KAFKA-15960
> Project: Kafka
>  Issue Type: Test
>Reporter: Apoorv Mittal
>Priority: Major
>
> PR build: 
> [https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka-pr/detail/PR-14767/15/tests/]
>  
> {code:java}
> org.opentest4j.AssertionFailedError: Client with 
> id=QuotasTestProducer-!@#$%^&*() should have been throttled, 0.0 ==> 
> expected: <true> but was: <false> 
> Stacktrace
> org.opentest4j.AssertionFailedError: Client with 
> id=QuotasTestProducer-!@#$%^&*() should have been throttled, 0.0 ==> 
> expected: <true> but was: <false> at 
> app//org.junit.jupiter.api.AssertionFailureBuilder.build(AssertionFailureBuilder.java:151)
>at 
> app//org.junit.jupiter.api.AssertionFailureBuilder.buildAndThrow(AssertionFailureBuilder.java:132)
>at app//org.junit.jupiter.api.AssertTrue.failNotTrue(AssertTrue.java:63)   
>  at app//org.junit.jupiter.api.AssertTrue.assertTrue(AssertTrue.java:36) 
> at app//org.junit.jupiter.api.Assertions.assertTrue(Assertions.java:210)  
>   at 
> app//kafka.api.QuotaTestClients.verifyThrottleTimeRequestChannelMetric(BaseQuotaTest.scala:260)
>   at 
> app//kafka.api.QuotaTestClients.verifyProduceThrottle(BaseQuotaTest.scala:269)
>at 
> app//kafka.api.BaseQuotaTest.testQuotaOverrideDelete(BaseQuotaTest.scala:157) 
>at 
> java.base@17.0.7/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)at 
> java.base@17.0.7/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
>   at 
> java.base@17.0.7/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  {code}
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16383) fix flaky test IdentityReplicationIntegrationTest.testReplicateFromLatest()

2024-04-27 Thread Igor Soarez (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Soarez updated KAFKA-16383:

Issue Type: Test  (was: Bug)

> fix flaky test IdentityReplicationIntegrationTest.testReplicateFromLatest()
> ---
>
> Key: KAFKA-16383
> URL: https://issues.apache.org/jira/browse/KAFKA-16383
> Project: Kafka
>  Issue Type: Test
>Reporter: Johnny Hsu
>Assignee: Johnny Hsu
>Priority: Major
>
> Build link: 
> [https://ci-builds.apache.org/job/Kafka/job/kafka-pr/job/PR-15463/4/testReport/junit/org.apache.kafka.connect.mirror.integration/IdentityReplicationIntegrationTest/Build___JDK_11_and_Scala_2_13___testReplicateFromLatest__/]
>  
> This test failed in the builds of several PRs, so it is flaky.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16174) Flaky test: testDescribeQuorumStatusSuccessful – org.apache.kafka.tools.MetadataQuorumCommandTest

2024-04-27 Thread Igor Soarez (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Soarez updated KAFKA-16174:

Issue Type: Test  (was: Bug)

> Flaky test: testDescribeQuorumStatusSuccessful – 
> org.apache.kafka.tools.MetadataQuorumCommandTest
> -
>
> Key: KAFKA-16174
> URL: https://issues.apache.org/jira/browse/KAFKA-16174
> Project: Kafka
>  Issue Type: Test
>Reporter: Apoorv Mittal
>Priority: Major
>
> [https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka-pr/detail/PR-15190/3/tests/]
>  
> {code:java}
> Error: java.util.concurrent.ExecutionException: java.lang.RuntimeException: 
> Received a fatal error while waiting for the controller to acknowledge that 
> we are caught up
> Stacktrace: java.util.concurrent.ExecutionException: 
> java.lang.RuntimeException: Received a fatal error while waiting for the 
> controller to acknowledge that we are caught up at 
> java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
> at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191)   at 
> kafka.testkit.KafkaClusterTestKit.startup(KafkaClusterTestKit.java:421)  
> at 
> kafka.test.junit.RaftClusterInvocationContext.lambda$getAdditionalExtensions$5(RaftClusterInvocationContext.java:116)
> at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeBeforeTestExecutionCallbacks$5(TestMethodTestDescriptor.java:192)
>   at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeBeforeMethodsOrCallbacksUntilExceptionOccurs$6(TestMethodTestDescriptor.java:203)
>   at 
> org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
> at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeBeforeMethodsOrCallbacksUntilExceptionOccurs(TestMethodTestDescriptor.java:203)
>at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeBeforeTestExecutionCallbacks(TestMethodTestDescriptor.java:191)
>at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:137)
>   at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:69)
>at 
> org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:151)
>at 
> org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
> at 
> org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141)
>at 
> org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
> at 
> org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139)
>at 
> org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
> at 
> org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138)
> at 
> org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95)
> at 
> org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:35)
>at 
> org.junit.platform.engine.support.hierarchical.NodeTestTask$DefaultDynamicTestExecutor.execute(NodeTestTask.java:226)
>     at 
> org.junit.platform.engine.support.hierarchical.NodeTestTask$DefaultDynamicTestExecutor.execute(NodeTestTask.java:204)
>  {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16174) Flaky test: testDescribeQuorumStatusSuccessful – org.apache.kafka.tools.MetadataQuorumCommandTest

2024-04-27 Thread Igor Soarez (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Soarez updated KAFKA-16174:

Labels: flaky-test  (was: )

> Flaky test: testDescribeQuorumStatusSuccessful – 
> org.apache.kafka.tools.MetadataQuorumCommandTest
> -
>
> Key: KAFKA-16174
> URL: https://issues.apache.org/jira/browse/KAFKA-16174
> Project: Kafka
>  Issue Type: Test
>Reporter: Apoorv Mittal
>Priority: Major
>  Labels: flaky-test
>
> [https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka-pr/detail/PR-15190/3/tests/]
>  
> {code:java}
> Error: java.util.concurrent.ExecutionException: java.lang.RuntimeException: 
> Received a fatal error while waiting for the controller to acknowledge that 
> we are caught up
> Stacktrace: java.util.concurrent.ExecutionException: 
> java.lang.RuntimeException: Received a fatal error while waiting for the 
> controller to acknowledge that we are caught up at 
> java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
> at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191)   at 
> kafka.testkit.KafkaClusterTestKit.startup(KafkaClusterTestKit.java:421)  
> at 
> kafka.test.junit.RaftClusterInvocationContext.lambda$getAdditionalExtensions$5(RaftClusterInvocationContext.java:116)
> at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeBeforeTestExecutionCallbacks$5(TestMethodTestDescriptor.java:192)
>   at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeBeforeMethodsOrCallbacksUntilExceptionOccurs$6(TestMethodTestDescriptor.java:203)
>   at 
> org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
> at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeBeforeMethodsOrCallbacksUntilExceptionOccurs(TestMethodTestDescriptor.java:203)
>at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeBeforeTestExecutionCallbacks(TestMethodTestDescriptor.java:191)
>at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:137)
>   at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:69)
>at 
> org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:151)
>at 
> org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
> at 
> org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141)
>at 
> org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
> at 
> org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139)
>at 
> org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
> at 
> org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138)
> at 
> org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95)
> at 
> org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:35)
>at 
> org.junit.platform.engine.support.hierarchical.NodeTestTask$DefaultDynamicTestExecutor.execute(NodeTestTask.java:226)
> at 
> org.junit.platform.engine.support.hierarchical.NodeTestTask$DefaultDynamicTestExecutor.execute(NodeTestTask.java:204)
>  {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16383) fix flaky test IdentityReplicationIntegrationTest.testReplicateFromLatest()

2024-04-27 Thread Igor Soarez (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Soarez updated KAFKA-16383:

Labels: flaky-test  (was: )

> fix flaky test IdentityReplicationIntegrationTest.testReplicateFromLatest()
> ---
>
> Key: KAFKA-16383
> URL: https://issues.apache.org/jira/browse/KAFKA-16383
> Project: Kafka
>  Issue Type: Test
>Reporter: Johnny Hsu
>Assignee: Johnny Hsu
>Priority: Major
>  Labels: flaky-test
>
> Build link: 
> [https://ci-builds.apache.org/job/Kafka/job/kafka-pr/job/PR-15463/4/testReport/junit/org.apache.kafka.connect.mirror.integration/IdentityReplicationIntegrationTest/Build___JDK_11_and_Scala_2_13___testReplicateFromLatest__/]
>  
> This test failed in the builds of several PRs, so it is flaky.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16634) Flaky test - testFenceMultipleBrokers() – org.apache.kafka.controller.QuorumControllerTest

2024-04-27 Thread Igor Soarez (Jira)
Igor Soarez created KAFKA-16634:
---

 Summary: Flaky test - testFenceMultipleBrokers() – 
org.apache.kafka.controller.QuorumControllerTest
 Key: KAFKA-16634
 URL: https://issues.apache.org/jira/browse/KAFKA-16634
 Project: Kafka
  Issue Type: Test
Reporter: Igor Soarez
 Attachments: output.txt

testFenceMultipleBrokers() – org.apache.kafka.controller.QuorumControllerTest

Error:
{code:java}
java.util.concurrent.TimeoutException: testFenceMultipleBrokers() timed out 
after 40 seconds {code}
Test logs in attached output.txt

https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka-pr/detail/PR-15816/1/tests/



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16633) Flaky test - testDescribeExistingGroupWithNoMembers(String, String).quorum=kraft+kip848.groupProtocol=consumer – org.apache.kafka.tools.consumer.group.DescribeConsumerGr

2024-04-27 Thread Igor Soarez (Jira)
Igor Soarez created KAFKA-16633:
---

 Summary: Flaky test - 
testDescribeExistingGroupWithNoMembers(String, 
String).quorum=kraft+kip848.groupProtocol=consumer – 
org.apache.kafka.tools.consumer.group.DescribeConsumerGroupTest
 Key: KAFKA-16633
 URL: https://issues.apache.org/jira/browse/KAFKA-16633
 Project: Kafka
  Issue Type: Test
Reporter: Igor Soarez


testDescribeExistingGroupWithNoMembers(String, 
String).quorum=kraft+kip848.groupProtocol=consumer – 
org.apache.kafka.tools.consumer.group.DescribeConsumerGroupTest
{code:java}
org.opentest4j.AssertionFailedError: Condition not met within timeout 15000. 
Expected no active member in describe group results with describe type 
--offsets ==> expected: <true> but was: <false> at 
app//org.junit.jupiter.api.AssertionFailureBuilder.build(AssertionFailureBuilder.java:151)
   at 
app//org.junit.jupiter.api.AssertionFailureBuilder.buildAndThrow(AssertionFailureBuilder.java:132)
   at app//org.junit.jupiter.api.AssertTrue.failNotTrue(AssertTrue.java:63) 
   at app//org.junit.jupiter.api.AssertTrue.assertTrue(AssertTrue.java:36) at 
app//org.junit.jupiter.api.Assertions.assertTrue(Assertions.java:214)at 
app//org.apache.kafka.test.TestUtils.lambda$waitForCondition$3(TestUtils.java:396)
   at 
app//org.apache.kafka.test.TestUtils.retryOnExceptionWithTimeout(TestUtils.java:444)
 at app//org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:393)   
 at app//org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:377)   
 at app//org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:350)   
 at 
app//org.apache.kafka.tools.consumer.group.DescribeConsumerGroupTest.testDescribeExistingGroupWithNoMembers(DescribeConsumerGroupTest.java:430)
 {code}
 

https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka-pr/detail/PR-15816/1/tests



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16632) Flaky test testDeleteOffsetsOfStableConsumerGroupWithTopicPartition [1] Type=Raft-Isolated, MetadataVersion=3.8-IV0, Security=PLAINTEXT – org.apache.kafka.tools.consumer

2024-04-27 Thread Igor Soarez (Jira)
Igor Soarez created KAFKA-16632:
---

 Summary: Flaky test 
testDeleteOffsetsOfStableConsumerGroupWithTopicPartition [1] 
Type=Raft-Isolated, MetadataVersion=3.8-IV0, Security=PLAINTEXT – 
org.apache.kafka.tools.consumer.group.DeleteOffsetsConsumerGroupCommandIntegrationTest
 Key: KAFKA-16632
 URL: https://issues.apache.org/jira/browse/KAFKA-16632
 Project: Kafka
  Issue Type: Test
Reporter: Igor Soarez


testDeleteOffsetsOfStableConsumerGroupWithTopicPartition [1] 
Type=Raft-Isolated, MetadataVersion=3.8-IV0, Security=PLAINTEXT – 
org.apache.kafka.tools.consumer.group.DeleteOffsetsConsumerGroupCommandIntegrationTest

 
{code:java}
org.opentest4j.AssertionFailedError: expected: not equal but was: <0>   at 
app//org.junit.jupiter.api.AssertionFailureBuilder.build(AssertionFailureBuilder.java:152)
   at 
app//org.junit.jupiter.api.AssertionFailureBuilder.buildAndThrow(AssertionFailureBuilder.java:132)
   at 
app//org.junit.jupiter.api.AssertNotEquals.failEqual(AssertNotEquals.java:277)  
 at 
app//org.junit.jupiter.api.AssertNotEquals.assertNotEquals(AssertNotEquals.java:94)
  at 
app//org.junit.jupiter.api.AssertNotEquals.assertNotEquals(AssertNotEquals.java:86)
  at 
app//org.junit.jupiter.api.Assertions.assertNotEquals(Assertions.java:1981)  at 
app//org.apache.kafka.tools.consumer.group.DeleteOffsetsConsumerGroupCommandIntegrationTest.withConsumerGroup(DeleteOffsetsConsumerGroupCommandIntegrationTest.java:213)
 at 
app//org.apache.kafka.tools.consumer.group.DeleteOffsetsConsumerGroupCommandIntegrationTest.testWithConsumerGroup(DeleteOffsetsConsumerGroupCommandIntegrationTest.java:179)
 at 
app//org.apache.kafka.tools.consumer.group.DeleteOffsetsConsumerGroupCommandIntegrationTest.testDeleteOffsetsOfStableConsumerGroupWithTopicPartition(DeleteOffsetsConsumerGroupCommandIntegrationTest.java:96)
{code}
https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka-pr/detail/PR-15816/1/tests



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16631) Flaky test - testDeleteOffsetsOfStableConsumerGroupWithTopicOnly [1] Type=Raft-Isolated, MetadataVersion=3.8-IV0, Security=PLAINTEXT – org.apache.kafka.tools.consumer.gr

2024-04-27 Thread Igor Soarez (Jira)
Igor Soarez created KAFKA-16631:
---

 Summary: Flaky test - 
testDeleteOffsetsOfStableConsumerGroupWithTopicOnly [1] Type=Raft-Isolated, 
MetadataVersion=3.8-IV0, Security=PLAINTEXT – 
org.apache.kafka.tools.consumer.group.DeleteOffsetsConsumerGroupCommandIntegrationTest
 Key: KAFKA-16631
 URL: https://issues.apache.org/jira/browse/KAFKA-16631
 Project: Kafka
  Issue Type: Test
Reporter: Igor Soarez


testDeleteOffsetsOfStableConsumerGroupWithTopicOnly [1] Type=Raft-Isolated, 
MetadataVersion=3.8-IV0, Security=PLAINTEXT – 
org.apache.kafka.tools.consumer.group.DeleteOffsetsConsumerGroupCommandIntegrationTest

 
{code:java}
org.opentest4j.AssertionFailedError: expected: not equal but was: <0>   at 
app//org.junit.jupiter.api.AssertionFailureBuilder.build(AssertionFailureBuilder.java:152)
   at 
app//org.junit.jupiter.api.AssertionFailureBuilder.buildAndThrow(AssertionFailureBuilder.java:132)
   at 
app//org.junit.jupiter.api.AssertNotEquals.failEqual(AssertNotEquals.java:277)  
 at 
app//org.junit.jupiter.api.AssertNotEquals.assertNotEquals(AssertNotEquals.java:94)
  at 
app//org.junit.jupiter.api.AssertNotEquals.assertNotEquals(AssertNotEquals.java:86)
  at 
app//org.junit.jupiter.api.Assertions.assertNotEquals(Assertions.java:1981)  at 
app//org.apache.kafka.tools.consumer.group.DeleteOffsetsConsumerGroupCommandIntegrationTest.withConsumerGroup(DeleteOffsetsConsumerGroupCommandIntegrationTest.java:213)
 at 
app//org.apache.kafka.tools.consumer.group.DeleteOffsetsConsumerGroupCommandIntegrationTest.testWithConsumerGroup(DeleteOffsetsConsumerGroupCommandIntegrationTest.java:179)
 at 
app//org.apache.kafka.tools.consumer.group.DeleteOffsetsConsumerGroupCommandIntegrationTest.testDeleteOffsetsOfStableConsumerGroupWithTopicOnly(DeleteOffsetsConsumerGroupCommandIntegrationTest.java:105)
{code}
https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka-pr/detail/PR-15816/1/tests



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16630) Flaky test "testPollReturnsRecords(GroupProtocol).groupProtocol=CLASSIC" – org.apache.kafka.clients.consumer.KafkaConsumerTest

2024-04-27 Thread Igor Soarez (Jira)
Igor Soarez created KAFKA-16630:
---

 Summary: Flaky test 
"testPollReturnsRecords(GroupProtocol).groupProtocol=CLASSIC" – 
org.apache.kafka.clients.consumer.KafkaConsumerTest
 Key: KAFKA-16630
 URL: https://issues.apache.org/jira/browse/KAFKA-16630
 Project: Kafka
  Issue Type: Test
Reporter: Igor Soarez


"testPollReturnsRecords(GroupProtocol).groupProtocol=CLASSIC" – 
org.apache.kafka.clients.consumer.KafkaConsumerTest

 
{code:java}
org.opentest4j.AssertionFailedError: expected: <0> but was: <5> at 
app//org.junit.jupiter.api.AssertionFailureBuilder.build(AssertionFailureBuilder.java:151)
   at 
app//org.junit.jupiter.api.AssertionFailureBuilder.buildAndThrow(AssertionFailureBuilder.java:132)
   at 
app//org.junit.jupiter.api.AssertEquals.failNotEqual(AssertEquals.java:197)  at 
app//org.junit.jupiter.api.AssertEquals.assertEquals(AssertEquals.java:150)  at 
app//org.junit.jupiter.api.AssertEquals.assertEquals(AssertEquals.java:145)  at 
app//org.junit.jupiter.api.Assertions.assertEquals(Assertions.java:531)  at 
app//org.apache.kafka.clients.consumer.KafkaConsumerTest.testPollReturnsRecords(KafkaConsumerTest.java:289)
 {code}
[https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka-pr/detail/PR-15816/1/tests]

 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (KAFKA-16622) Mirromaker2 first Checkpoint not emitted until consumer group fully catches up once

2024-04-27 Thread Edoardo Comar (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17841137#comment-17841137
 ] 

Edoardo Comar edited comment on KAFKA-16622 at 4/27/24 10:55 AM:
-

if after the consumer reaches 1 and the 1st checkpoint is emitted, MM2 
restarts before the other 1 messages are produced,
then bug https://issues.apache.org/jira/browse/KAFKA-15905 hits and we end up 
with just two checkpoints, at 1 and 2.

but the problem here is that if the consumer never fully catches up once, we 
will never have a checkpoint.

If as initial state the OffsetSyncStore.offsetSyncs contained a distribution of 
OffsetSync rather than just multiple copies of the last OffsetSync, Checkpoints 
would be computed earlier I think.


was (Author: ecomar):
if after the consumer reaches 1 and the 1st checkpoint is emitted, MM2 
restarts before the other 1 messages are produced,
then bug https://issues.apache.org/jira/browse/KAFKA-15905 hits and we end up 
with just two checkpoints, at 1 and 2.

but the problem here is that if the consumer never fully catches up once, we 
will never have a checkpoint.

If the OffsetSyncStore.offsetSyncs contained a distribution of OffsetSync 
rather than just multiple copies of the last OffsetSync, Checkpoints would be 
computed before.

> Mirromaker2 first Checkpoint not emitted until consumer group fully catches 
> up once
> ---
>
> Key: KAFKA-16622
> URL: https://issues.apache.org/jira/browse/KAFKA-16622
> Project: Kafka
>  Issue Type: Bug
>  Components: mirrormaker
>Affects Versions: 3.7.0, 3.6.2, 3.8.0
>Reporter: Edoardo Comar
>Priority: Major
> Attachments: connect.log.2024-04-26-10.zip, 
> edo-connect-mirror-maker-sourcetarget.properties
>
>
> We observed an excessively delayed emission of the MM2 Checkpoint record.
> It only gets created when the source consumer reaches the end of a topic. 
> This does not seem reasonable.
> In a very simple setup :
> Tested with a standalone single-process MirrorMaker2 mirroring between two 
> single-node kafka clusters (mirrormaker config attached) with quick refresh 
> intervals (eg 5 sec) and a small offset.lag.max (eg 10)
> create a single topic in the source cluster
> produce data to it (e.g. 1 records)
> start a slow consumer - e.g. fetching 50 records/poll and pausing 1 sec 
> between polls which commits after each poll
> watch the Checkpoint topic in the target cluster
> bin/kafka-console-consumer.sh --bootstrap-server localhost:9192 \
>   --topic source.checkpoints.internal \
>   --formatter org.apache.kafka.connect.mirror.formatters.CheckpointFormatter \
>--from-beginning
> -> no record appears in the checkpoint topic until the consumer reaches the 
> end of the topic (ie its consumer group lag gets down to 0).
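Not part of the original report, but for illustration: a minimal sketch of the kind of 
slow consumer described above, assuming a hypothetical source topic named "test-topic" 
and group id "slow-group" on the source cluster at localhost:9092.
{code:java}
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.ByteArrayDeserializer;

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

public class SlowConsumer {
    public static void main(String[] args) throws InterruptedException {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // source cluster (assumed)
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "slow-group");              // hypothetical group id
        props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 50);                // fetch at most 50 records per poll
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class);

        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("test-topic"));      // hypothetical topic name
            while (true) {
                ConsumerRecords<byte[], byte[]> records = consumer.poll(Duration.ofSeconds(1));
                System.out.println("Polled " + records.count() + " records");
                consumer.commitSync();    // commit after each poll
                Thread.sleep(1000);       // pause 1 second between polls
            }
        }
    }
}
{code}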



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16606) JBOD support in KRaft does not seem to be gated by the metadata version

2024-04-27 Thread Igor Soarez (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16606?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Soarez updated KAFKA-16606:

Component/s: config
 core

> JBOD support in KRaft does not seem to be gated by the metadata version
> ---
>
> Key: KAFKA-16606
> URL: https://issues.apache.org/jira/browse/KAFKA-16606
> Project: Kafka
>  Issue Type: Bug
>  Components: config, core
>Affects Versions: 3.7.0
>Reporter: Jakub Scholz
>Assignee: Igor Soarez
>Priority: Major
> Fix For: 3.8.0, 3.7.1
>
>
> JBOD support in KRaft should be supported since Kafka 3.7.0. The Kafka 
> [source 
> code|https://github.com/apache/kafka/blob/1b301b30207ed8fca9f0aea5cf940b0353a1abca/server-common/src/main/java/org/apache/kafka/server/common/MetadataVersion.java#L194-L195]
>  suggests that it is supported with the metadata version {{{}3.7-IV2{}}}. 
> However, it seems to be possible to run KRaft cluster with JBOD even with 
> older metadata versions such as {{{}3.6{}}}. For example, I have a cluster 
> using the {{3.6}} metadata version:
> {code:java}
> bin/kafka-features.sh --bootstrap-server localhost:9092 describe
> Feature: metadata.version       SupportedMinVersion: 3.0-IV1    
> SupportedMaxVersion: 3.7-IV4    FinalizedVersionLevel: 3.6-IV2  Epoch: 1375 
> {code}
> Yet a KRaft cluster with JBOD seems to run fine:
> {code:java}
> bin/kafka-log-dirs.sh --bootstrap-server localhost:9092 --describe
> Querying brokers for log directories information
> Received log directory information from brokers 2000,3000,1000
> {"brokers":[{"broker":2000,"logDirs":[{"partitions":[{"partition":"__consumer_offsets-13","size":0,"offsetLag":0,"isFuture":false},{"partition":"__consumer_offsets-46","size":0,"offsetLag":0,"isFuture":false},{"partition":"kafka-test-apps-0","size":28560,"offsetLag":0,"isFuture":false},{"partition":"__consumer_offsets-9","size":0,"offsetLag":0,"isFuture":false},{"partition":"__consumer_offsets-42","size":0,"offsetLag":0,"isFuture":false},{"partition":"__consumer_offsets-21","size":0,"offsetLag":0,"isFuture":false},{"partition":"__consumer_offsets-17","size":0,"offsetLag":0,"isFuture":false},{"partition":"__consumer_offsets-30","size":0,"offsetLag":0,"isFuture":false},{"partition":"__consumer_offsets-26","size":0,"offsetLag":0,"isFuture":false},{"partition":"__consumer_offsets-5","size":0,"offsetLag":0,"isFuture":false},{"partition":"__consumer_offsets-38","size":0,"offsetLag":0,"isFuture":false},{"partition":"__consumer_offsets-1","size":0,"offsetLag":0,"isFuture":false},{"partition":"__consumer_offsets-34","size":0,"offsetLag":0,"isFuture":false},{"partition":"__consumer_offsets-16","size":0,"offsetLag":0,"isFuture":false},{"partition":"__consumer_offsets-45","size":0,"offsetLag":0,"isFuture":false},{"partition":"__consumer_offsets-12","size":0,"offsetLag":0,"isFuture":false},{"partition":"__consumer_offsets-41","size":0,"offsetLag":0,"isFuture":false},{"partition":"__consumer_offsets-24","size":0,"offsetLag":0,"isFuture":false},{"partition":"__consumer_offsets-20","size":0,"offsetLag":0,"isFuture":false},{"partition":"__consumer_offsets-49","size":0,"offsetLag":0,"isFuture":false},{"partition":"__consumer_offsets-0","size":0,"offsetLag":0,"isFuture":false},{"partition":"__consumer_offsets-29","size":0,"offsetLag":0,"isFuture":false},{"partition":"__consumer_offsets-25","size":0,"offsetLag":0,"isFuture":false},{"partition":"__consumer_offsets-8","size":0,"of

[jira] [Updated] (KAFKA-16606) JBOD support in KRaft does not seem to be gated by the metadata version

2024-04-27 Thread Igor Soarez (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16606?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Soarez updated KAFKA-16606:

Fix Version/s: 3.8.0
   3.7.1

> JBOD support in KRaft does not seem to be gated by the metadata version
> ---
>
> Key: KAFKA-16606
> URL: https://issues.apache.org/jira/browse/KAFKA-16606
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 3.7.0
>Reporter: Jakub Scholz
>Assignee: Igor Soarez
>Priority: Major
> Fix For: 3.8.0, 3.7.1
>
>
> JBOD support in KRaft should be supported since Kafka 3.7.0. The Kafka 
> [source 
> code|https://github.com/apache/kafka/blob/1b301b30207ed8fca9f0aea5cf940b0353a1abca/server-common/src/main/java/org/apache/kafka/server/common/MetadataVersion.java#L194-L195]
>  suggests that it is supported with the metadata version {{{}3.7-IV2{}}}. 
> However, it seems to be possible to run KRaft cluster with JBOD even with 
> older metadata versions such as {{{}3.6{}}}. For example, I have a cluster 
> using the {{3.6}} metadata version:
> {code:java}
> bin/kafka-features.sh --bootstrap-server localhost:9092 describe
> Feature: metadata.version       SupportedMinVersion: 3.0-IV1    
> SupportedMaxVersion: 3.7-IV4    FinalizedVersionLevel: 3.6-IV2  Epoch: 1375 
> {code}
> Yet a KRaft cluster with JBOD seems to run fine:
> {code:java}
> bin/kafka-log-dirs.sh --bootstrap-server localhost:9092 --describe
> Querying brokers for log directories information
> Received log directory information from brokers 2000,3000,1000
> {"brokers":[{"broker":2000,"logDirs":[{"partitions":[{"partition":"__consumer_offsets-13","size":0,"offsetLag":0,"isFuture":false},{"partition":"__consumer_offsets-46","size":0,"offsetLag":0,"isFuture":false},{"partition":"kafka-test-apps-0","size":28560,"offsetLag":0,"isFuture":false},{"partition":"__consumer_offsets-9","size":0,"offsetLag":0,"isFuture":false},{"partition":"__consumer_offsets-42","size":0,"offsetLag":0,"isFuture":false},{"partition":"__consumer_offsets-21","size":0,"offsetLag":0,"isFuture":false},{"partition":"__consumer_offsets-17","size":0,"offsetLag":0,"isFuture":false},{"partition":"__consumer_offsets-30","size":0,"offsetLag":0,"isFuture":false},{"partition":"__consumer_offsets-26","size":0,"offsetLag":0,"isFuture":false},{"partition":"__consumer_offsets-5","size":0,"offsetLag":0,"isFuture":false},{"partition":"__consumer_offsets-38","size":0,"offsetLag":0,"isFuture":false},{"partition":"__consumer_offsets-1","size":0,"offsetLag":0,"isFuture":false},{"partition":"__consumer_offsets-34","size":0,"offsetLag":0,"isFuture":false},{"partition":"__consumer_offsets-16","size":0,"offsetLag":0,"isFuture":false},{"partition":"__consumer_offsets-45","size":0,"offsetLag":0,"isFuture":false},{"partition":"__consumer_offsets-12","size":0,"offsetLag":0,"isFuture":false},{"partition":"__consumer_offsets-41","size":0,"offsetLag":0,"isFuture":false},{"partition":"__consumer_offsets-24","size":0,"offsetLag":0,"isFuture":false},{"partition":"__consumer_offsets-20","size":0,"offsetLag":0,"isFuture":false},{"partition":"__consumer_offsets-49","size":0,"offsetLag":0,"isFuture":false},{"partition":"__consumer_offsets-0","size":0,"offsetLag":0,"isFuture":false},{"partition":"__consumer_offsets-29","size":0,"offsetLag":0,"isFuture":false},{"partition":"__consumer_offsets-25","size":0,"offsetLag":0,"isFuture":false},{"partition":"__consumer_offsets-8","size":0,"offsetLag":0,"isFuture&quo

[jira] [Commented] (KAFKA-15743) KRaft support in ReplicationQuotasTest

2024-04-27 Thread Igor Soarez (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-15743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17841411#comment-17841411
 ] 

Igor Soarez commented on KAFKA-15743:
-

[~pprovenzano] that's right. When a broker re-registers with different 
directory UUIDs the controller will mark the partitions offline, update ISR and 
re-elect leaders. Would you like to open a PR to address this?

> KRaft support in ReplicationQuotasTest
> --
>
> Key: KAFKA-15743
> URL: https://issues.apache.org/jira/browse/KAFKA-15743
> Project: Kafka
>  Issue Type: Task
>  Components: core
>Reporter: Sameer Tejani
>Assignee: Dmitry Werner
>Priority: Minor
>  Labels: kraft, kraft-test, newbie
> Fix For: 3.8.0
>
>
> The following tests in ReplicationQuotasTest in 
> core/src/test/scala/unit/kafka/server/ReplicationQuotasTest.scala need to be 
> updated to support KRaft
> 59 : def shouldBootstrapTwoBrokersWithLeaderThrottle(): Unit = {
> 64 : def shouldBootstrapTwoBrokersWithFollowerThrottle(): Unit = {
> 171 : def shouldThrottleOldSegments(): Unit = {
> Scanned 240 lines. Found 0 KRaft tests out of 3 tests



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (KAFKA-14892) Harmonize package names in storage module

2024-04-26 Thread Giuseppe Calabrese (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-14892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17841398#comment-17841398
 ] 

Giuseppe Calabrese edited comment on KAFKA-14892 at 4/27/24 5:16 AM:
-

[~abhijeetkumar] [~chia7712] any ideas about it?


was (Author: JIRAUSER305174):
[~abhijeetkumar] [~chia7712] any ideas about it?

> Harmonize package names in storage module
> -
>
> Key: KAFKA-14892
> URL: https://issues.apache.org/jira/browse/KAFKA-14892
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ismael Juma
>Priority: Major
> Fix For: 3.8.0
>
>
> We currently have:
>  # org.apache.kafka.server.log.remote.storage: public api in storage-api 
> module
>  # org.apache.kafka.server.log.remote: private api in storage module
>  # org.apache.kafka.storage.internals.log: private api in storage module
> A way to make this consistent could be:
>  # org.apache.kafka.storage.* or org.apache.kafka.storage.api.*: public api 
> in storage-api module
>  # org.apache.kafka.storage.internals.log.remote: private api in storage 
> module
>  # org.apache.kafka.storage.internals.log: private api in storage module 
> (stays the same)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-14892) Harmonize package names in storage module

2024-04-26 Thread Giuseppe Calabrese (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-14892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17841398#comment-17841398
 ] 

Giuseppe Calabrese commented on KAFKA-14892:


[~abhijeetkumar] [~chia7712] any ideas about it?

> Harmonize package names in storage module
> -
>
> Key: KAFKA-14892
> URL: https://issues.apache.org/jira/browse/KAFKA-14892
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ismael Juma
>Priority: Major
> Fix For: 3.8.0
>
>
> We currently have:
>  # org.apache.kafka.server.log.remote.storage: public api in storage-api 
> module
>  # org.apache.kafka.server.log.remote: private api in storage module
>  # org.apache.kafka.storage.internals.log: private api in storage module
> A way to make this consistent could be:
>  # org.apache.kafka.storage.* or org.apache.kafka.storage.api.*: public api 
> in storage-api module
>  # org.apache.kafka.storage.internals.log.remote: private api in storage 
> module
>  # org.apache.kafka.storage.internals.log: private api in storage module 
> (stays the same)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (KAFKA-16629) add broker-related tests to ConfigCommandIntegrationTest

2024-04-26 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai reassigned KAFKA-16629:
--

Assignee: 黃竣陽  (was: Chia-Ping Tsai)

> add broker-related tests to ConfigCommandIntegrationTest
> 
>
> Key: KAFKA-16629
> URL: https://issues.apache.org/jira/browse/KAFKA-16629
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Chia-Ping Tsai
>Assignee: 黃竣陽
>Priority: Major
>
> [https://github.com/apache/kafka/pull/15645] will rewrite 
> ConfigCommandIntegrationTest in Java with the new test infra. However, it 
> still lacks enough tests for broker-related configs.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-16629) add broker-related tests to ConfigCommandIntegrationTest

2024-04-26 Thread Jira


[ 
https://issues.apache.org/jira/browse/KAFKA-16629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17841391#comment-17841391
 ] 

黃竣陽 commented on KAFKA-16629:
-

I will handle this issue.

> add broker-related tests to ConfigCommandIntegrationTest
> 
>
> Key: KAFKA-16629
> URL: https://issues.apache.org/jira/browse/KAFKA-16629
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Major
>
> [https://github.com/apache/kafka/pull/15645] will rewrite 
> ConfigCommandIntegrationTest in Java with the new test infra. However, it 
> still lacks enough tests for broker-related configs.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-6527) Transient failure in DynamicBrokerReconfigurationTest.testDefaultTopicConfig

2024-04-26 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai resolved KAFKA-6527.
---
Resolution: Fixed

It is enabled by https://github.com/apache/kafka/pull/15796

> Transient failure in DynamicBrokerReconfigurationTest.testDefaultTopicConfig
> 
>
> Key: KAFKA-6527
> URL: https://issues.apache.org/jira/browse/KAFKA-6527
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Assignee: TaiJuWu
>Priority: Blocker
>  Labels: flakey, flaky-test
> Fix For: 3.8.0
>
>
> {code:java}
> java.lang.AssertionError: Log segment size increase not applied
>   at kafka.utils.TestUtils$.fail(TestUtils.scala:355)
>   at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:865)
>   at 
> kafka.server.DynamicBrokerReconfigurationTest.testDefaultTopicConfig(DynamicBrokerReconfigurationTest.scala:348)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (KAFKA-6527) Transient failure in DynamicBrokerReconfigurationTest.testDefaultTopicConfig

2024-04-26 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai reassigned KAFKA-6527:
-

Assignee: TaiJuWu

> Transient failure in DynamicBrokerReconfigurationTest.testDefaultTopicConfig
> 
>
> Key: KAFKA-6527
> URL: https://issues.apache.org/jira/browse/KAFKA-6527
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Assignee: TaiJuWu
>Priority: Blocker
>  Labels: flakey, flaky-test
> Fix For: 3.8.0
>
>
> {code:java}
> java.lang.AssertionError: Log segment size increase not applied
>   at kafka.utils.TestUtils$.fail(TestUtils.scala:355)
>   at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:865)
>   at 
> kafka.server.DynamicBrokerReconfigurationTest.testDefaultTopicConfig(DynamicBrokerReconfigurationTest.scala:348)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16629) add broker-related tests to ConfigCommandIntegrationTest

2024-04-26 Thread Chia-Ping Tsai (Jira)
Chia-Ping Tsai created KAFKA-16629:
--

 Summary: add broker-related tests to ConfigCommandIntegrationTest
 Key: KAFKA-16629
 URL: https://issues.apache.org/jira/browse/KAFKA-16629
 Project: Kafka
  Issue Type: Improvement
Reporter: Chia-Ping Tsai
Assignee: Chia-Ping Tsai


[https://github.com/apache/kafka/pull/15645] will rewrite 
ConfigCommandIntegrationTest in Java with the new test infra. However, it still 
lacks enough tests for broker-related configs.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-16514) Kafka Streams: stream.close(CloseOptions) does not respect options.leaveGroup flag.

2024-04-26 Thread A. Sophie Blee-Goldman (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17841374#comment-17841374
 ] 

A. Sophie Blee-Goldman commented on KAFKA-16514:


I opened a quick example PR to showcase more clearly what I'm proposing here. 
It's definitely rather hacky, but as I said already, this functionality of not 
leaving the group when a consumer is closed was done in a hacky way to begin 
with (i.e. via an internal consumer config introduced for use by Kafka Streams 
only). So we may as well fix this issue so that the Streams closeOptions have 
correct semantics.

The more I think about this the more I feel strongly that it's just silly for 
Streams users to be unable to opt-out of this "don't leave the group on close" 
behavior. It's not even possible to use the internal config since Streams 
strictly overrides it inside StreamsConfig. You can work around this by 
plugging in your own consumers via KafkaClientSupplier though that does feel a 
bit extreme. More importantly though, you'd still have to choose up front 
whether or not to leave the group on close, where you would obviously not know 
whether it makes sense to leave until you're actually calling close and know 
_why_ you're calling close (specifically, whether it's a temporary 
restart/bounce or "permanent" close)
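To make that workaround concrete, here is a minimal, untested sketch (class name 
hypothetical) of a KafkaClientSupplier that forces the internal flag when creating the 
main consumer and otherwise delegates to the standard clients:
{code:java}
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.common.serialization.ByteArrayDeserializer;
import org.apache.kafka.common.serialization.ByteArraySerializer;
import org.apache.kafka.streams.KafkaClientSupplier;

import java.util.HashMap;
import java.util.Map;

public class LeaveGroupOnCloseClientSupplier implements KafkaClientSupplier {

    @Override
    public Admin getAdmin(final Map<String, Object> config) {
        return Admin.create(config);
    }

    @Override
    public Producer<byte[], byte[]> getProducer(final Map<String, Object> config) {
        return new KafkaProducer<>(config, new ByteArraySerializer(), new ByteArraySerializer());
    }

    @Override
    public Consumer<byte[], byte[]> getConsumer(final Map<String, Object> config) {
        // Force the undocumented internal flag so the main consumer sends a
        // leave-group request on close, regardless of what StreamsConfig set.
        final Map<String, Object> overridden = new HashMap<>(config);
        overridden.put("internal.leave.group.on.close", true);
        return new KafkaConsumer<>(overridden, new ByteArrayDeserializer(), new ByteArrayDeserializer());
    }

    @Override
    public Consumer<byte[], byte[]> getRestoreConsumer(final Map<String, Object> config) {
        return new KafkaConsumer<>(config, new ByteArrayDeserializer(), new ByteArrayDeserializer());
    }

    @Override
    public Consumer<byte[], byte[]> getGlobalConsumer(final Map<String, Object> config) {
        return new KafkaConsumer<>(config, new ByteArrayDeserializer(), new ByteArrayDeserializer());
    }
}
{code}
It would be used via new KafkaStreams(topology, props, new LeaveGroupOnCloseClientSupplier()), 
which is exactly the kind of up-front, all-or-nothing choice described above.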

> Kafka Streams: stream.close(CloseOptions) does not respect options.leaveGroup 
> flag.
> ---
>
> Key: KAFKA-16514
>     URL: https://issues.apache.org/jira/browse/KAFKA-16514
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 3.7.0
>Reporter: Sal Sorrentino
>Priority: Minor
>
> Working with Kafka Streams 3.7.0, but may affect earlier versions as well.
> When attempting to shutdown a streams application and leave the associated 
> consumer group, the supplied `leaveGroup` option seems to have no effect. 
> Sample code:
> {code:java}
> CloseOptions options = new CloseOptions().leaveGroup(true);
> stream.close(options);{code}
> The expected behavior here is that the group member would shutdown and leave 
> the group, immediately triggering a consumer group rebalance. In practice, 
> the rebalance happens after the appropriate timeout configuration has expired.
> I understand the default behavior in that there is an assumption that any 
> associated StateStores would be persisted to disk and that in the case of a 
> rolling restart/deployment, the rebalance delay may be preferable. However, 
> in our application we are using in-memory state stores and standby replicas. 
> There is no benefit in delaying the rebalance in this setup and we are in 
> need of a way to force a member to leave the group when shutting down.
> The workaround we found is to set an undocumented internal StreamConfig to 
> enforce this behavior:
> {code:java}
> props.put("internal.leave.group.on.close", true);
> {code}
> To state the obvious, this is less than ideal.
> Additional configuration details:
> {code:java}
> Properties props = new Properties();
> props.put(StreamsConfig.APPLICATION_ID_CONFIG, "someApplicationId");
> props.put(
> StreamsConfig.BOOTSTRAP_SERVERS_CONFIG,
> "localhost:9092,localhost:9093,localhost:9094");
> props.put(StreamsConfig.REPLICATION_FACTOR_CONFIG, 3);
> props.put(StreamsConfig.NUM_STANDBY_REPLICAS_CONFIG, 1);
> props.put(StreamsConfig.NUM_STREAM_THREADS_CONFIG, numProcessors);
> props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, 
> StreamsConfig.EXACTLY_ONCE_V2);{code}
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-16514) Kafka Streams: stream.close(CloseOptions) does not respect options.leaveGroup flag.

2024-04-26 Thread A. Sophie Blee-Goldman (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17841369#comment-17841369
 ] 

A. Sophie Blee-Goldman commented on KAFKA-16514:


The immutable internal config thing is definitely a bummer.

To recap: if we want to solve this so that the current Streams API – ie the 
#close(closeOptions) API – works as intended, ie for non-static members as 
well, we'd need to change the way the consumer works. Or wait for mutable 
configs, which would be nice, but realistically that's not happening soon 
enough.

To do this "right" we'd probably need to introduce a new public consumer API of 
some sort which would mean going through a KIP which could be a bit messy.

But as a slightly-hacky alternative, would it be possible to just introduce an 
internal API that has the same effect as the existing internal config, 
and just have Kafka Streams use that internal API without making it a "real" 
API and having to do a KIP? I mean that's basically what the internal config is 
anyways – an internal config not exposed to/intended for use by consumer 
applications and only introduced for Kafka Streams to use. Doesn't seem that 
big a deal to just switch from this immutable config to a new internal overload 
of #close (or even an internal #leaveGroupOnClose API that can be toggled 
on/off). Thoughts?
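Purely to illustrate the shape of what is being discussed (no such method exists in the 
consumer today; the name is hypothetical):
{code:java}
// Hypothetical sketch only -- not an existing consumer API.
// An internal overload Kafka Streams could call instead of close(Duration),
// deciding at close time whether to send a leave-group request.
public interface InternalConsumerClose {
    /**
     * Close the consumer; if leaveGroup is true, send a leave-group request so
     * that a rebalance is triggered immediately rather than after the session
     * timeout expires.
     */
    void close(java.time.Duration timeout, boolean leaveGroup);
}
{code}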

[~mjsax] [~cadonna] maybe you can raise this with someone who works on the 
clients to see if there are any concerns/make sure no one would object to this 
approach?

> Kafka Streams: stream.close(CloseOptions) does not respect options.leaveGroup 
> flag.
> ---
>
> Key: KAFKA-16514
>     URL: https://issues.apache.org/jira/browse/KAFKA-16514
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 3.7.0
>Reporter: Sal Sorrentino
>Priority: Minor
>
> Working with Kafka Streams 3.7.0, but may affect earlier versions as well.
> When attempting to shutdown a streams application and leave the associated 
> consumer group, the supplied `leaveGroup` option seems to have no effect. 
> Sample code:
> {code:java}
> CloseOptions options = new CloseOptions().leaveGroup(true);
> stream.close(options);{code}
> The expected behavior here is that the group member would shutdown and leave 
> the group, immediately triggering a consumer group rebalance. In practice, 
> the rebalance happens after the appropriate timeout configuration has expired.
> I understand the default behavior in that there is an assumption that any 
> associated StateStores would be persisted to disk and that in the case of a 
> rolling restart/deployment, the rebalance delay may be preferable. However, 
> in our application we are using in-memory state stores and standby replicas. 
> There is no benefit in delaying the rebalance in this setup and we are in 
> need of a way to force a member to leave the group when shutting down.
> The workaround we found is to set an undocumented internal StreamConfig to 
> enforce this behavior:
> {code:java}
> props.put("internal.leave.group.on.close", true);
> {code}
> To state the obvious, this is less than ideal.
> Additional configuration details:
> {code:java}
> Properties props = new Properties();
> props.put(StreamsConfig.APPLICATION_ID_CONFIG, "someApplicationId");
> props.put(
> StreamsConfig.BOOTSTRAP_SERVERS_CONFIG,
> "localhost:9092,localhost:9093,localhost:9094");
> props.put(StreamsConfig.REPLICATION_FACTOR_CONFIG, 3);
> props.put(StreamsConfig.NUM_STANDBY_REPLICAS_CONFIG, 1);
> props.put(StreamsConfig.NUM_STREAM_THREADS_CONFIG, numProcessors);
> props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, 
> StreamsConfig.EXACTLY_ONCE_V2);{code}
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (KAFKA-16628) Add system test for validating static consumer bounce and assignment when not eager

2024-04-26 Thread Lianet Magrans (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lianet Magrans reassigned KAFKA-16628:
--

Assignee: Lianet Magrans

> Add system test for validating static consumer bounce and assignment when not 
> eager
> ---
>
> Key: KAFKA-16628
> URL: https://issues.apache.org/jira/browse/KAFKA-16628
> Project: Kafka
>  Issue Type: Task
>  Components: consumer, system tests
>Reporter: Lianet Magrans
>Assignee: Lianet Magrans
>Priority: Major
>
> Existing system 
> [test|https://github.com/apache/kafka/blob/e7792258df934a5c8470c2925c5d164c7d5a8e6c/tests/kafkatest/tests/client/consumer_test.py#L209]
>  includes a test for validating that partitions are not re-assigned when a 
> static member is bounced, but the test design and setup is intended for 
> testing this for the Eager assignment strategy only (based on the eager 
> protocol where all dynamic members revoke their partitions when a rebalance 
> happens). 
> We should consider adding a test that would ensure that partitions are not 
> re-assigned when using the cooperative sticky assignor or the new consumer 
> group protocol assignments. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16628) Add system test for validating static consumer bounce and assignment when not eager

2024-04-26 Thread Lianet Magrans (Jira)
Lianet Magrans created KAFKA-16628:
--

 Summary: Add system test for validating static consumer bounce and 
assignment when not eager
 Key: KAFKA-16628
 URL: https://issues.apache.org/jira/browse/KAFKA-16628
 Project: Kafka
  Issue Type: Task
  Components: consumer, system tests
Reporter: Lianet Magrans


Existing system 
[test|https://github.com/apache/kafka/blob/e7792258df934a5c8470c2925c5d164c7d5a8e6c/tests/kafkatest/tests/client/consumer_test.py#L209]
 includes a test for validating that partitions are not re-assigned when a 
static member is bounced, but the test design and setup is intended for testing 
this for the Eager assignment strategy only (based on the eager protocol where 
all dynamic members revoke their partitions when a rebalance happens). 
We should consider adding a test that would ensure that partitions are not 
re-assigned when using the cooperative sticky assignor or the new consumer 
group protocol assignments. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (KAFKA-16528) Reset member heartbeat interval when request sent

2024-04-26 Thread Lianet Magrans (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lianet Magrans closed KAFKA-16528.
--

> Reset member heartbeat interval when request sent
> -
>
> Key: KAFKA-16528
> URL: https://issues.apache.org/jira/browse/KAFKA-16528
> Project: Kafka
>  Issue Type: Task
>  Components: clients, consumer
>Reporter: Lianet Magrans
>Assignee: Lianet Magrans
>Priority: Major
>  Labels: kip-848-client-support
> Fix For: 3.8.0
>
>
> Member should reset the heartbeat timer when the request is sent, rather than 
> when a response is received. This aims to ensure that we don't add to the 
> interval any delay there might be in a response. With this, we better respect 
> the contract of members sending HB on the interval to remain in the group, 
> and avoid potential unwanted rebalances.   
> Note that there is already logic in place to avoid sending a request if a 
> response hasn't been received. So that will ensure that, even with the reset 
> of the interval on the send, the next HB will only be sent once the 
> response is received. (It will be sent out on the next poll of the HB manager, 
> respecting the minimal backoff for sending consecutive requests.) This 
> will btw be consistent with how the interval timing & in-flights are handled 
> for auto-commits.
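As a rough illustration of the timing contract described above (a simplified model, not 
the actual consumer heartbeat request manager):
{code:java}
// Illustrative only -- simplified model of the described heartbeat timing.
public class HeartbeatTimingModel {
    private final long intervalMs;
    private long lastSentMs = -1;          // -1 means no heartbeat sent yet
    private boolean requestInFlight = false;

    public HeartbeatTimingModel(long intervalMs) {
        this.intervalMs = intervalMs;
    }

    /** Called on every poll of the heartbeat manager; returns true if a
     *  HeartbeatRequest should be sent now. */
    public boolean maybeSendHeartbeat(long nowMs) {
        boolean intervalElapsed = lastSentMs < 0 || nowMs - lastSentMs >= intervalMs;
        if (requestInFlight || !intervalElapsed) {
            return false;                  // waiting for a response, or interval not elapsed
        }
        lastSentMs = nowMs;                // interval resets when the request is SENT
        requestInFlight = true;
        return true;
    }

    /** Called when the heartbeat response (or an error) is received. */
    public void onResponse() {
        requestInFlight = false;           // next heartbeat goes out on a later poll
    }
}
{code}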



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-16609) Update parse_describe_topic to support new topic describe output

2024-04-26 Thread Kirk True (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk True resolved KAFKA-16609.
---
  Reviewer: Lucas Brutschy
Resolution: Fixed

> Update parse_describe_topic to support new topic describe output
> 
>
> Key: KAFKA-16609
> URL: https://issues.apache.org/jira/browse/KAFKA-16609
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, system tests
>Affects Versions: 3.8.0
>Reporter: Kirk True
>Assignee: Kirk True
>Priority: Major
>  Labels: system-test-failure
> Fix For: 3.8.0
>
>
> It appears that recent changes to the describe topic output have broken the 
> system test's ability to parse the output.
> {noformat}
> test_id:
> kafkatest.tests.core.reassign_partitions_test.ReassignPartitionsTest.test_reassign_partitions.bounce_brokers=False.reassign_from_offset_zero=True.metadata_quorum=ISOLATED_KRAFT.use_new_coordinator=True.group_protocol=consumer
> status: FAIL
> run time:   50.333 seconds
> IndexError('list index out of range')
> Traceback (most recent call last):
>   File 
> "/home/jenkins/workspace/system-test-kafka-branch-builder/kafka/venv/lib/python3.7/site-packages/ducktape/tests/runner_client.py",
>  line 184, in _do_run
> data = self.run_test()
>   File 
> "/home/jenkins/workspace/system-test-kafka-branch-builder/kafka/venv/lib/python3.7/site-packages/ducktape/tests/runner_client.py",
>  line 262, in run_test
> return self.test_context.function(self.test)
>   File 
> "/home/jenkins/workspace/system-test-kafka-branch-builder/kafka/venv/lib/python3.7/site-packages/ducktape/mark/_mark.py",
>  line 433, in wrapper
> return functools.partial(f, *args, **kwargs)(*w_args, **w_kwargs)
>   File 
> "/home/jenkins/workspace/system-test-kafka-branch-builder/kafka/tests/kafkatest/tests/core/reassign_partitions_test.py",
>  line 175, in test_reassign_partitions
> self.run_produce_consume_validate(core_test_action=lambda: 
> self.reassign_partitions(bounce_brokers))
>   File 
> "/home/jenkins/workspace/system-test-kafka-branch-builder/kafka/tests/kafkatest/tests/produce_consume_validate.py",
>  line 105, in run_produce_consume_validate
> core_test_action(*args)
>   File 
> "/home/jenkins/workspace/system-test-kafka-branch-builder/kafka/tests/kafkatest/tests/core/reassign_partitions_test.py",
>  line 175, in 
> self.run_produce_consume_validate(core_test_action=lambda: 
> self.reassign_partitions(bounce_brokers))
>   File 
> "/home/jenkins/workspace/system-test-kafka-branch-builder/kafka/tests/kafkatest/tests/core/reassign_partitions_test.py",
>  line 82, in reassign_partitions
> partition_info = 
> self.kafka.parse_describe_topic(self.kafka.describe_topic(self.topic))
>   File 
> "/home/jenkins/workspace/system-test-kafka-branch-builder/kafka/tests/kafkatest/services/kafka/kafka.py",
>  line 1400, in parse_describe_topic
> fields = list(map(lambda x: x.split(" ")[1], fields))
>   File 
> "/home/jenkins/workspace/system-test-kafka-branch-builder/kafka/tests/kafkatest/services/kafka/kafka.py",
>  line 1400, in 
> fields = list(map(lambda x: x.split(" ")[1], fields))
> IndexError: list index out of range
> {noformat} 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-16584) Make log processing summary configurable or debug

2024-04-26 Thread Matthias J. Sax (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17841332#comment-17841332
 ] 

Matthias J. Sax commented on KAFKA-16584:
-

There is an issue with creating a new account: 
https://issues.apache.org/jira/browse/INFRA-25451 – we are waiting for the 
infra team to resolve it. Unfortunately, we cannot do anything about it.

The only thing I can offer is: if you prepare a KIP using some other tool (eg a 
Google doc or similar) and share it with me, I can copy it into the wiki for 
you.

> Make log processing summary configurable or debug
> -
>
> Key: KAFKA-16584
> URL: https://issues.apache.org/jira/browse/KAFKA-16584
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 3.6.2
>Reporter: Andras Hatvani
>Assignee: dujian
>Priority: Major
>  Labels: needs-kip, newbie
>
> Currently, *every two minutes for every stream thread*, statistics are 
> logged at INFO log level. 
> {code}
> 2024-04-18T09:18:23.790+02:00  INFO 33178 --- [service] [-StreamThread-1] 
> o.a.k.s.p.internals.StreamThread         : stream-thread 
> [service-149405a3-c7e3-4505-8bbd-c3bff226b115-StreamThread-1] Processed 0 
> total records, ran 0 punctuators, and committed 0 total tasks since the last 
> update {code}
> This is absolutely unnecessary and even harmful since it fills the logs and 
> thus storage space with unwanted and useless data. Otherwise the INFO logs 
> are useful and helpful, therefore it's not an option to raise the log level 
> to WARN.
> Please make the logProcessingSummary 
> * either a DEBUG-level log, or
> * configurable so that it can be disabled.
> This is the relevant code: 
> https://github.com/apache/kafka/blob/aee9724ee15ed539ae73c09cc2c2eda83ae3c864/streams/src/main/java/org/apache/kafka/streams/processor/internals/StreamThread.java#L1073



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (KAFKA-16622) Mirromaker2 first Checkpoint not emitted until consumer group fully catches up once

2024-04-26 Thread Edoardo Comar (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17841137#comment-17841137
 ] 

Edoardo Comar edited comment on KAFKA-16622 at 4/26/24 6:04 PM:


if after the consumer reaches 10,000 and the 1st checkpoint is emitted, MM2 
restarts before the other 10,000 messages are produced,
then bug https://issues.apache.org/jira/browse/KAFKA-15905 hits and we end up 
with just two checkpoints, at 10,000 and 20,000.

but the problem here is that if the consumer never fully catches up once, we 
will never have a checkpoint.

If the OffsetSyncStore.offsetSyncs contained a distribution of OffsetSync 
entries, rather than just multiple copies of the last OffsetSync, checkpoints 
could be computed before the consumer fully catches up.
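
For what it's worth, a small sketch of that idea (purely illustrative, not the actual MirrorMaker 2 OffsetSyncStore): keep a spread of offset syncs keyed by upstream offset rather than only the most recent one, so a committed offset that still lags the replication head can be translated from the closest sync at or before it.

{code:java}
import java.util.Map;
import java.util.OptionalLong;
import java.util.TreeMap;

// Illustrative sketch only. record() stores an upstream -> downstream sync point;
// translate() finds the latest sync at or before the consumer's committed offset,
// which yields a safe (possibly conservative) downstream checkpoint.
public class OffsetSyncIndex {
    private final TreeMap<Long, Long> syncs = new TreeMap<>();

    public void record(long upstreamOffset, long downstreamOffset) {
        syncs.put(upstreamOffset, downstreamOffset);
    }

    public OptionalLong translate(long committedUpstreamOffset) {
        Map.Entry<Long, Long> floor = syncs.floorEntry(committedUpstreamOffset);
        if (floor == null) {
            return OptionalLong.empty(); // no sync old enough to translate this offset yet
        }
        return OptionalLong.of(floor.getValue());
    }
}
{code}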


was (Author: ecomar):
if after the consumer reaches 10,000 and the 1st checkpoint is emitted, MM2 
restarts before the other 10,000 messages are produced,
then bug https://issues.apache.org/jira/browse/KAFKA-15905 hits and we end up 
with just two checkpoints, at 10,000 and 20,000.

 

but the problem here is that if the consumer never fully catches up once, we 
will never have a checkpoint

> Mirromaker2 first Checkpoint not emitted until consumer group fully catches 
> up once
> ---
>
> Key: KAFKA-16622
> URL: https://issues.apache.org/jira/browse/KAFKA-16622
> Project: Kafka
>  Issue Type: Bug
>  Components: mirrormaker
>Affects Versions: 3.7.0, 3.6.2, 3.8.0
>Reporter: Edoardo Comar
>Priority: Major
> Attachments: connect.log.2024-04-26-10.zip, 
> edo-connect-mirror-maker-sourcetarget.properties
>
>
> We observed an excessively delayed emission of the MM2 Checkpoint record.
> It only gets created when the source consumer reaches the end of a topic. 
> This does not seem reasonable.
> In a very simple setup :
> Tested with a standalone single process MirrorMaker2 mirroring between two 
> single-node kafka clusters(mirromaker config attached) with quick refresh 
> intervals (eg 5 sec) and a small offset.lag.max (eg 10)
> create a single topic in the source cluster
> produce data to it (e.g. 10,000 records)
> start a slow consumer - e.g. fetching 50 records/poll and pausing 1 sec 
> between polls which commits after each poll
> watch the Checkpoint topic in the target cluster
> bin/kafka-console-consumer.sh --bootstrap-server localhost:9192 \
>   --topic source.checkpoints.internal \
>   --formatter org.apache.kafka.connect.mirror.formatters.CheckpointFormatter \
>--from-beginning
> -> no record appears in the checkpoint topic until the consumer reaches the 
> end of the topic (ie its consumer group lag gets down to 0).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-15342) Considering upgrading to Mockito 5.4.1 or later

2024-04-26 Thread Chia-Ping Tsai (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-15342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17841294#comment-17841294
 ] 

Chia-Ping Tsai commented on KAFKA-15342:


I'm not sure about the benefit of having separate unit and integration tests. Our CI always 
runs all tests regardless of fast failure. The difference in JVM settings is 
`maxHeapSize` (2g vs. 2560m), but I don't think that is something important. 
Also, we often neglect the tag in testing. 

Maybe we should just get rid of the category.

 

> Considering upgrading to Mockito 5.4.1 or later
> ---
>
> Key: KAFKA-15342
> URL: https://issues.apache.org/jira/browse/KAFKA-15342
> Project: Kafka
>  Issue Type: Task
>  Components: unit tests
>Reporter: Chris Egerton
>Assignee: Chris Egerton
>Priority: Blocker
> Fix For: 4.0.0
>
>
> We're currently stuck on Mockito 4.x.y because the 5.x.y line requires Java 
> 11 and, until we begin to work on Kafka 4.0.0, we continue to support Java 8.
> Either directly before, or after releasing Kafka 4.0.0, we should try to 
> upgrade to a version of Mockito on the 5.x.y line.
> If we're able to use a version that includes 
> [https://github.com/mockito/mockito/pull/3078|https://github.com/mockito/mockito/pull/3078,]
>  (which should be included in either a 5.4.1 or 5.5.0 release), we should 
> also revert the change made for 
> https://issues.apache.org/jira/browse/KAFKA-14682, which is just a temporary 
> workaround. Care should be taken that, after reverting that change, unused 
> stubbings are still correctly reported during our CI builds.
> If the effort required to upgrade our Mockito version is too high, we can 
> either downgrade the severity of this ticket, or split it out into separate 
> subtasks for each to-be-upgraded module.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-14892) Harmonize package names in storage module

2024-04-26 Thread Giuseppe Calabrese (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-14892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17841254#comment-17841254
 ] 

Giuseppe Calabrese commented on KAFKA-14892:


Hi, I'm new to the ASF.

I wonder if this ticket could be assigned to me.

Please let me know. Thanks

> Harmonize package names in storage module
> -
>
> Key: KAFKA-14892
> URL: https://issues.apache.org/jira/browse/KAFKA-14892
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ismael Juma
>Priority: Major
> Fix For: 3.8.0
>
>
> We currently have:
>  # org.apache.kafka.server.log.remote.storage: public api in storage-api 
> module
>  # org.apache.kafka.server.log.remote: private api in storage module
>  # org.apache.kafka.storage.internals.log: private api in storage module
> A way to make this consistent could be:
>  # org.apache.kafka.storage.* or org.apache.kafka.storage.api.*: public api 
> in storage-api module
>  # org.apache.kafka.storage.internals.log.remote: private api in storage 
> module
>  # org.apache.kafka.storage.internals.log: private api in storage module 
> (stays the same)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (KAFKA-16611) Consider adding test name to "client.id" of Admin in testing

2024-04-26 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai reassigned KAFKA-16611:
--

Assignee: PoAn Yang  (was: Chia-Ping Tsai)

> Consider adding test name to "client.id" of Admin in testing
> 
>
> Key: KAFKA-16611
> URL: https://issues.apache.org/jira/browse/KAFKA-16611
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Chia-Ping Tsai
>Assignee: PoAn Yang
>Priority: Minor
>
> I observed the following errors many times.
> {quote}
> org.opentest4j.AssertionFailedError: Found 16 unexpected threads during 
> @BeforeAll: `kafka-admin-client-thread | 
> adminclient-287,kafka-admin-client-thread | 
> adminclient-276,kafka-admin-client-thread | 
> adminclient-271,kafka-admin-client-thread | 
> adminclient-293,kafka-admin-client-thread | 
> adminclient-281,kafka-admin-client-thread | 
> adminclient-302,kafka-admin-client-thread | 
> adminclient-334,kafka-admin-client-thread | 
> adminclient-323,kafka-admin-client-thread | 
> adminclient-257,kafka-admin-client-thread | 
> adminclient-336,kafka-admin-client-thread | 
> adminclient-308,kafka-admin-client-thread | 
> adminclient-263,kafka-admin-client-thread | 
> adminclient-273,kafka-admin-client-thread | 
> adminclient-278,kafka-admin-client-thread | 
> adminclient-283,kafka-admin-client-thread | adminclient-317` ==> expected: 
>  but was: 
> {quote}
> That could be caused by an exceptional shutdown, or we do have resource leaks in 
> some failed tests. Adding the test name to "client.id" can give hints about 
> that.
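
As an illustration (not the project's actual test utilities), an Admin client created for a test could embed the JUnit test name in "client.id", so that leaked "kafka-admin-client-thread | ..." threads point back at the offending test:

{code:java}
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.junit.jupiter.api.TestInfo;

// Hypothetical helper: the factory method and its location are made up for illustration.
public class TestAdminFactory {
    public static Admin createAdmin(String bootstrapServers, TestInfo testInfo) {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        // The client.id shows up in the admin client's thread name, e.g.
        // "kafka-admin-client-thread | adminclient-shouldDoSomething()"
        props.put(AdminClientConfig.CLIENT_ID_CONFIG, "adminclient-" + testInfo.getDisplayName());
        return Admin.create(props);
    }
}
{code}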



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-15838) [Connect] ExtractField and InsertField NULL Values are replaced by default value even in NULLABLE fields

2024-04-26 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison updated KAFKA-15838:
---
Component/s: connect

> [Connect] ExtractField and InsertField NULL Values are replaced by default 
> value even in NULLABLE fields
> 
>
> Key: KAFKA-15838
> URL: https://issues.apache.org/jira/browse/KAFKA-15838
> Project: Kafka
>  Issue Type: Bug
>  Components: connect
>Reporter: Eric Pangiawan
>Assignee: Mario Fiore Vitale
>Priority: Major
>
> ExtractField: Line 116-119
> [https://github.com/a0x8o/kafka/blob/master/connect/transforms/src/main/java/org/apache/kafka/connect/transforms/ExtractField.java#L61-L68]
> InsertField: Line 163 - 195
> [https://github.com/a0x8o/kafka/blob/master/connect/transforms/src/main/java/org/apache/kafka/connect/transforms/InsertField.java#L163-L195]
> h1. Expect:
> Value `null` is valid for an optional field, even though the field has a 
> default value.
> Only when the field is required should the class fall back to the default 
> value when the value is `null`.
> h1. Actual:
> The default value is always returned if `null` was given.
> h1. Example:
> PostgreSQL DDL:
> {code:java}
> CREATE TABLE products(
>     id varchar(255),
>     color varchar(255),
>     quantity float8
> );
> -- Set Default
> ALTER TABLE products ALTER COLUMN quantity SET  DEFAULT 1.0; {code}
> Insert A Record:
> {code:java}
> INSERT INTO public.products VALUES('1', 'Blue', null); {code}
> Table Select *:
> {code:java}
>  id | color | quantity
> +---+--
>  1  | Blue  | {code}
> Debezium Behavior when using ExtractField and InsertField class (in the event 
> flattening SMT):
> {code:java}
> {
>     "id":"1",
>     "color":"Blue",
>     "quantity":1.0,
>     "__op":"c",
>     "__ts_ms":1698127432079,
>     "__source_ts_ms":1698127431679,
>     "__db":"testing_db",
>     "__schema":"public",
>     "__table":"products",
>     "__lsn":24470112,
>     "__txId":957,
>     "__snapshot":null,
>     "__deleted":"false"
> } {code}
> The debezium code can be found 
> [here|https://github.com/debezium/debezium/blob/2.4/debezium-core/src/main/java/io/debezium/transforms/ExtractNewRecordState.java#L116-L119]
> h1. Expected Output:
> {code:java}
> {
>     "id":"1",
>     "color":"Blue",
>     "quantity":null,
>     "__op":"c",
>     "__ts_ms":1698127432079,
>     "__source_ts_ms":1698127431679,
>     "__db":"testing_db",
>     "__schema":"public",
>     "__table":"products",
>     "__lsn":24470112,
>     "__txId":957,
>     "__snapshot":null,
>     "__deleted":"false"
> }{code}
> h1. Temporary Solution:
> use getWithoutDefault() in ExtractField and InsertField instead of get()
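
A small standalone sketch of the behavior the ticket describes (this only exercises the Connect Struct API, not the SMT code itself): get() substitutes the field's default when the stored value is null, while getWithoutDefault() preserves the explicit null.

{code:java}
import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.data.SchemaBuilder;
import org.apache.kafka.connect.data.Struct;

public class NullVsDefaultExample {
    public static void main(String[] args) {
        // Optional float64 field with a default of 1.0, mirroring the "quantity" column above.
        Schema schema = SchemaBuilder.struct()
                .field("quantity", SchemaBuilder.float64().optional().defaultValue(1.0d).build())
                .build();

        Struct row = new Struct(schema);
        row.put("quantity", null); // the source record really contained NULL

        System.out.println(row.get("quantity"));               // 1.0 -> default applied
        System.out.println(row.getWithoutDefault("quantity")); // null -> value preserved
    }
}
{code}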



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-16611) Consider adding test name to "client.id" of Admin in testing

2024-04-26 Thread PoAn Yang (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17841193#comment-17841193
 ] 

PoAn Yang commented on KAFKA-16611:
---

Hi [~chia7712], I'm interested in this issue. May I take it? Thank you.

> Consider adding test name to "client.id" of Admin in testing
> 
>
> Key: KAFKA-16611
> URL: https://issues.apache.org/jira/browse/KAFKA-16611
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Minor
>
> I observed the following errors many times.
> {quote}
> org.opentest4j.AssertionFailedError: Found 16 unexpected threads during 
> @BeforeAll: `kafka-admin-client-thread | 
> adminclient-287,kafka-admin-client-thread | 
> adminclient-276,kafka-admin-client-thread | 
> adminclient-271,kafka-admin-client-thread | 
> adminclient-293,kafka-admin-client-thread | 
> adminclient-281,kafka-admin-client-thread | 
> adminclient-302,kafka-admin-client-thread | 
> adminclient-334,kafka-admin-client-thread | 
> adminclient-323,kafka-admin-client-thread | 
> adminclient-257,kafka-admin-client-thread | 
> adminclient-336,kafka-admin-client-thread | 
> adminclient-308,kafka-admin-client-thread | 
> adminclient-263,kafka-admin-client-thread | 
> adminclient-273,kafka-admin-client-thread | 
> adminclient-278,kafka-admin-client-thread | 
> adminclient-283,kafka-admin-client-thread | adminclient-317` ==> expected: 
>  but was: 
> {quote}
> That could be caused by an exceptional shutdown, or we do have resource leaks in 
> some failed tests. Adding the test name to "client.id" can give hints about 
> that.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (KAFKA-16622) Mirromaker2 first Checkpoint not emitted until consumer group fully catches up once

2024-04-26 Thread Edoardo Comar (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17841137#comment-17841137
 ] 

Edoardo Comar edited comment on KAFKA-16622 at 4/26/24 10:40 AM:
-

if after the consumer reaches 10,000 and the 1st checkpoint is emitted, MM2 
restarts before the other 10,000 messages are produced,
then bug https://issues.apache.org/jira/browse/KAFKA-15905 hits and we end up 
with just two checkpoints, at 10,000 and 20,000.

 

but the problem here is that if the consumer never fully catches up once, we 
will never have a checkpoint


was (Author: ecomar):
if after the consumer reaches 10,000 and the 1st checkpoint is emitted, MM2 
restarts before the other 10,000 messages are produced,
then bug https://issues.apache.org/jira/browse/KAFKA-15905 hits and we end up 
with just two checkpoints, at 10,000 and 20,000.


> Mirromaker2 first Checkpoint not emitted until consumer group fully catches 
> up once
> ---
>
> Key: KAFKA-16622
> URL: https://issues.apache.org/jira/browse/KAFKA-16622
> Project: Kafka
>  Issue Type: Bug
>  Components: mirrormaker
>Affects Versions: 3.7.0, 3.6.2, 3.8.0
>Reporter: Edoardo Comar
>Priority: Major
> Attachments: connect.log.2024-04-26-10.zip, 
> edo-connect-mirror-maker-sourcetarget.properties
>
>
> We observed an excessively delayed emission of the MM2 Checkpoint record.
> It only gets created when the source consumer reaches the end of a topic. 
> This does not seem reasonable.
> In a very simple setup :
> Tested with a standalone single process MirrorMaker2 mirroring between two 
> single-node kafka clusters(mirromaker config attached) with quick refresh 
> intervals (eg 5 sec) and a small offset.lag.max (eg 10)
> create a single topic in the source cluster
> produce data to it (e.g. 10,000 records)
> start a slow consumer - e.g. fetching 50 records/poll and pausing 1 sec 
> between polls which commits after each poll
> watch the Checkpoint topic in the target cluster
> bin/kafka-console-consumer.sh --bootstrap-server localhost:9192 \
>   --topic source.checkpoints.internal \
>   --formatter org.apache.kafka.connect.mirror.formatters.CheckpointFormatter \
>--from-beginning
> -> no record appears in the checkpoint topic until the consumer reaches the 
> end of the topic (ie its consumer group lag gets down to 0).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-16622) Mirromaker2 first Checkpoint not emitted until consumer group fully catches up once

2024-04-26 Thread Edoardo Comar (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17841137#comment-17841137
 ] 

Edoardo Comar commented on KAFKA-16622:
---

if after the consumer reaches 10,000 and the 1st checkpoint is emitted, MM2 
restarts before the other 10,000 messages are produced,
then bug https://issues.apache.org/jira/browse/KAFKA-15905 hits and we end up 
with just two checkpoints, at 10,000 and 20,000.


> Mirromaker2 first Checkpoint not emitted until consumer group fully catches 
> up once
> ---
>
> Key: KAFKA-16622
> URL: https://issues.apache.org/jira/browse/KAFKA-16622
> Project: Kafka
>  Issue Type: Bug
>  Components: mirrormaker
>Affects Versions: 3.7.0, 3.6.2, 3.8.0
>Reporter: Edoardo Comar
>Priority: Major
> Attachments: connect.log.2024-04-26-10.zip, 
> edo-connect-mirror-maker-sourcetarget.properties
>
>
> We observed an excessively delayed emission of the MM2 Checkpoint record.
> It only gets created when the source consumer reaches the end of a topic. 
> This does not seem reasonable.
> In a very simple setup :
> Tested with a standalone single process MirrorMaker2 mirroring between two 
> single-node kafka clusters(mirromaker config attached) with quick refresh 
> intervals (eg 5 sec) and a small offset.lag.max (eg 10)
> create a single topic in the source cluster
> produce data to it (e.g. 10,000 records)
> start a slow consumer - e.g. fetching 50 records/poll and pausing 1 sec 
> between polls which commits after each poll
> watch the Checkpoint topic in the target cluster
> bin/kafka-console-consumer.sh --bootstrap-server localhost:9192 \
>   --topic source.checkpoints.internal \
>   --formatter org.apache.kafka.connect.mirror.formatters.CheckpointFormatter \
>--from-beginning
> -> no record appears in the checkpoint topic until the consumer reaches the 
> end of the topic (ie its consumer group lag gets down to 0).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (KAFKA-16622) Mirromaker2 first Checkpoint not emitted until consumer group fully catches up once

2024-04-26 Thread Edoardo Comar (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17841134#comment-17841134
 ] 

Edoardo Comar edited comment on KAFKA-16622 at 4/26/24 10:18 AM:
-

Hi [~gharris1727]
The task did not restart. Here are the trace logs generated in the following 
scenario.
clean start with the properties file posted above.
create 'mytopic' 1 topic with 1 partition
produce 10,000 messages to source cluster
start a single consumer in source cluster that in a loop
 - polls 100 messages
 - commitSync
 - waits 1 sec
e.g.

clientProps.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, "100");

...
while (System.currentTimeMillis() - now < 1200*1000L) {
    ConsumerRecords<String, String> crs1 = consumer.poll(Duration.ofMillis(1000L));
    polledCount = polledCount + print(crs1, consumer);
    consumer.commitSync(Duration.ofSeconds(10));
    Thread.sleep(1000L);
}

the first checkpoint is only emitted when the consumer catches up fully at 
10,000.
Then another 10,000 messages are produced quickly and the consumer advances, and 
some checkpoints are emitted
so that overall we have

Checkpoint{consumerGroupId=mygroup1, topicPartition=source.mytopic-0, 
upstreamOffset=10000, downstreamOffset=10000, metadata=}
Checkpoint{consumerGroupId=mygroup1, topicPartition=source.mytopic-0, 
upstreamOffset=16700, downstreamOffset=16501, metadata=}
Checkpoint{consumerGroupId=mygroup1, topicPartition=source.mytopic-0, 
upstreamOffset=18200, downstreamOffset=18096, metadata=}
Checkpoint{consumerGroupId=mygroup1, topicPartition=source.mytopic-0, 
upstreamOffset=19200, downstreamOffset=18965, metadata=}
Checkpoint{consumerGroupId=mygroup1, topicPartition=source.mytopic-0, 
upstreamOffset=19700, downstreamOffset=19636, metadata=}
Checkpoint{consumerGroupId=mygroup1, topicPartition=source.mytopic-0, 
upstreamOffset=20000, downstreamOffset=20000, metadata=}

[^connect.log.2024-04-26-10.zip]


was (Author: ecomar):
Hi [~gharris1727]
The task did not restart. Here are the trace logs generated in the following 
scenario.
clean start with the properties file posted above.
create 'mytopic' 1 topic with 1 partition
produce 10,000 messages to source cluster
start a single consumer in source cluster that in a loop
 - polls 100 messages
 - commitSync
 - waits 1 sec
e.g.

clientProps.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, "100");

...
while (System.currentTimeMillis() - now < 1200*1000L) {
    ConsumerRecords<String, String> crs1 = consumer.poll(

[jira] [Comment Edited] (KAFKA-16622) Mirromaker2 first Checkpoint not emitted until consumer group fully catches up once

2024-04-26 Thread Edoardo Comar (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17841134#comment-17841134
 ] 

Edoardo Comar edited comment on KAFKA-16622 at 4/26/24 10:17 AM:
-

Hi [~gharris1727]
The task did not restart. Here are the trace logs generated in the following 
scenario.
clean start with the properties file posted above.
create 'mytopic' 1 topic with 1 partition
produce 10,000 messages to source cluster
start a single consumer in source cluster that in a loop
 - polls 100 messages
 - commitSync
 - waits 1 sec
e.g.

while (System.currentTimeMillis() - now < 1200*1000L) {
    ConsumerRecords<String, String> crs1 = consumer.poll(Duration.ofMillis(1000L));
    polledCount = polledCount + print(crs1, consumer);
    consumer.commitSync(Duration.ofSeconds(10));
    Thread.sleep(1000L);
}

the first checkpoint is only emitted when the consumer catches up fully at 
10,000.
Then another 10,000 messages are produced quickly and the consumer advances, and 
some checkpoints are emitted
so that overall we have

Checkpoint{consumerGroupId=mygroup1, topicPartition=source.mytopic-0, 
upstreamOffset=10000, downstreamOffset=10000, metadata=}
Checkpoint{consumerGroupId=mygroup1, topicPartition=source.mytopic-0, 
upstreamOffset=16700, downstreamOffset=16501, metadata=}
Checkpoint{consumerGroupId=mygroup1, topicPartition=source.mytopic-0, 
upstreamOffset=18200, downstreamOffset=18096, metadata=}
Checkpoint{consumerGroupId=mygroup1, topicPartition=source.mytopic-0, 
upstreamOffset=19200, downstreamOffset=18965, metadata=}
Checkpoint{consumerGroupId=mygroup1, topicPartition=source.mytopic-0, 
upstreamOffset=19700, downstreamOffset=19636, metadata=}
Checkpoint{consumerGroupId=mygroup1, topicPartition=source.mytopic-0, 
upstreamOffset=20000, downstreamOffset=20000, metadata=}

[^connect.log.2024-04-26-10.zip]


was (Author: ecomar):
Hi [~gharris1727]
The task did not restart. Here are the trace logs generated in the following 
scenario.
clean start with the properties file posted above.
create 'mytopic' 1 topic with 1 partition
produce 10,000 messages to source cluster
start a single consumer in source cluster that in a loop
 - polls 100 messages
 - commitSync
 - waits 1 sec
e.g.

while (System.currentTimeMillis() - now < 1200*1000L) {
    ConsumerRecords<String, String> crs1 = consumer.poll(Duration.ofMillis(1000L));
    polledCount = polledCount + print(crs1, consumer);
    consumer.commitSync(Duration.ofSeconds(10));
    Thread.sleep(

[jira] [Comment Edited] (KAFKA-16622) Mirromaker2 first Checkpoint not emitted until consumer group fully catches up once

2024-04-26 Thread Edoardo Comar (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17841134#comment-17841134
 ] 

Edoardo Comar edited comment on KAFKA-16622 at 4/26/24 10:17 AM:
-

Hi [~gharris1727]
The task did not restart. Here are the trace logs generated in the following 
scenario.
clean start with the properties file posted above.
create 'mytopic' 1 topic with 1 partition
produce 10,000 messages to source cluster
start a single consumer in source cluster that in a loop
 - polls 100 messages
 - commitSync
 - waits 1 sec
e.g.

clientProps.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, "100");

...
while (System.currentTimeMillis() - now < 1200*1000L) {
    ConsumerRecords<String, String> crs1 = consumer.poll(Duration.ofMillis(1000L));
    polledCount = polledCount + print(crs1, consumer);
    consumer.commitSync(Duration.ofSeconds(10));
    Thread.sleep(1000L);
}

the first checkpoint is only emitted when the consumer catches up fully at 
10,000.
Then another 10,000 messages are produced quickly and the consumer advances, and 
some checkpoints are emitted
so that overall we have

Checkpoint{consumerGroupId=mygroup1, topicPartition=source.mytopic-0, 
upstreamOffset=10000, downstreamOffset=10000, metadata=}
Checkpoint{consumerGroupId=mygroup1, topicPartition=source.mytopic-0, 
upstreamOffset=16700, downstreamOffset=16501, metadata=}
Checkpoint{consumerGroupId=mygroup1, topicPartition=source.mytopic-0, 
upstreamOffset=18200, downstreamOffset=18096, metadata=}
Checkpoint{consumerGroupId=mygroup1, topicPartition=source.mytopic-0, 
upstreamOffset=19200, downstreamOffset=18965, metadata=}
Checkpoint{consumerGroupId=mygroup1, topicPartition=source.mytopic-0, 
upstreamOffset=19700, downstreamOffset=19636, metadata=}
Checkpoint{consumerGroupId=mygroup1, topicPartition=source.mytopic-0, 
upstreamOffset=20000, downstreamOffset=20000, metadata=}

[^connect.log.2024-04-26-10.zip]


was (Author: ecomar):
Hi [~gharris1727]
The task did not restart. Here are the trace logs generated in the following 
scenario.
clean start with the properties file posted above.
create 'mytopic' 1 topic with 1 partition
produce 10,000 messages to source cluster
start a single consumer in source cluster that in a loop
 - polls 100 messages
 - commitSync
 - waits 1 sec
e.g.

while (System.currentTimeMillis() - now < 1200*1000L) {
    ConsumerRecords<String, String> crs1 = consumer.poll(Duration.ofMillis(1000L));
    polledCount = polledCount + print(crs1,

[jira] [Comment Edited] (KAFKA-16622) Mirromaker2 first Checkpoint not emitted until consumer group fully catches up once

2024-04-26 Thread Edoardo Comar (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17841134#comment-17841134
 ] 

Edoardo Comar edited comment on KAFKA-16622 at 4/26/24 10:16 AM:
-

Hi [~gharris1727]
The task did not restart. Here are the trace logs generated in the following 
scenario.
clean start with the properties file posted above.
create 'mytopic' 1 topic with 1 partition
produce 10,000 messages to source cluster
start a single consumer in source cluster that in a loop
 - polls 100 messages
 - commitSync
 - waits 1 sec
e.g.

while (System.currentTimeMillis() - now < 1200*1000L) {
    ConsumerRecords<String, String> crs1 = consumer.poll(Duration.ofMillis(1000L));
    polledCount = polledCount + print(crs1, consumer);
    consumer.commitSync(Duration.ofSeconds(10));
    Thread.sleep(1000L);
}

the first checkpoint is only emitted when the consumer catches up fully at 
10,000.
Then another 10,000 messages are produced quickly and the consumer advances, and 
some checkpoints are emitted
so that overall we have

Checkpoint{consumerGroupId=mygroup1, topicPartition=source.mytopic-0, 
upstreamOffset=10000, downstreamOffset=10000, metadata=}
Checkpoint{consumerGroupId=mygroup1, topicPartition=source.mytopic-0, 
upstreamOffset=16700, downstreamOffset=16501, metadata=}
Checkpoint{consumerGroupId=mygroup1, topicPartition=source.mytopic-0, 
upstreamOffset=18200, downstreamOffset=18096, metadata=}
Checkpoint{consumerGroupId=mygroup1, topicPartition=source.mytopic-0, 
upstreamOffset=19200, downstreamOffset=18965, metadata=}
Checkpoint{consumerGroupId=mygroup1, topicPartition=source.mytopic-0, 
upstreamOffset=19700, downstreamOffset=19636, metadata=}
Checkpoint{consumerGroupId=mygroup1, topicPartition=source.mytopic-0, 
upstreamOffset=20000, downstreamOffset=20000, metadata=}

[^connect.log.2024-04-26-10.zip]


was (Author: ecomar):
Hi [~gharris1727]
The task did not restart. Here are the trace logs generated in the following 
scenario.
clean start with the properties file posted above.
create 'mytopic' 1 topic with 1 partition
produce 10,000 messages to source cluster
start a single consumer in source cluster that in a loop
 - polls 100 messages
 - commitSync
 - waits 1 sec
e.g.

while (...) {
    ConsumerRecords<String, String> crs1 = consumer.poll(Duration.ofMillis(1000L));
    // print(crs1, consumer); print last record of polled
    consumer.commitSync(Duration.ofSeconds(10));
    Thread.sleep(1000L);
}

the first checkpoint is only emitted when the consumer catches up fully at 
10,000.
Then another 10,000 messages are produced quickly and the consumer advances, and 
some checkpoints are emitted
so that overall we have

Checkpoint{consumerGroupId=mygroup1, topicPartition=source.mytopic-0, 
upstreamOffset=10000, downstreamOffset=10000, metadata=}
Checkpoint{consumerGroupId=mygroup1, topicPartition=source.mytopic-0, upstreamOff

[jira] [Comment Edited] (KAFKA-16622) Mirromaker2 first Checkpoint not emitted until consumer group fully catches up once

2024-04-26 Thread Edoardo Comar (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17841134#comment-17841134
 ] 

Edoardo Comar edited comment on KAFKA-16622 at 4/26/24 10:15 AM:
-

Hi [~gharris1727]
The task did not restart. Here are the trace logs generated in the following 
scenario.
clean start with the properties file posted above.
create 'mytopic' 1 topic with 1 partition
produce 10,000 messages to source cluster
start a single consumer in source cluster that in a loop
 - polls 100 messages
 - commitSync
 - waits 1 sec
e.g.

while (...) {
    ConsumerRecords<String, String> crs1 = consumer.poll(Duration.ofMillis(1000L));
    // print(crs1, consumer); print last record of polled
    consumer.commitSync(Duration.ofSeconds(10));
    Thread.sleep(1000L);
}

the first checkpoint is only emitted when the consumer catches up fully at 
10,000.
Then another 10,000 messages are produced quickly and the consumer advances, and 
some checkpoints are emitted
so that overall we have

Checkpoint{consumerGroupId=mygroup1, topicPartition=source.mytopic-0, 
upstreamOffset=10000, downstreamOffset=10000, metadata=}
Checkpoint{consumerGroupId=mygroup1, topicPartition=source.mytopic-0, 
upstreamOffset=16700, downstreamOffset=16501, metadata=}
Checkpoint{consumerGroupId=mygroup1, topicPartition=source.mytopic-0, 
upstreamOffset=18200, downstreamOffset=18096, metadata=}
Checkpoint{consumerGroupId=mygroup1, topicPartition=source.mytopic-0, 
upstreamOffset=19200, downstreamOffset=18965, metadata=}
Checkpoint{consumerGroupId=mygroup1, topicPartition=source.mytopic-0, 
upstreamOffset=19700, downstreamOffset=19636, metadata=}
Checkpoint{consumerGroupId=mygroup1, topicPartition=source.mytopic-0, 
upstreamOffset=20000, downstreamOffset=20000, metadata=}

[^connect.log.2024-04-26-10.zip]


was (Author: ecomar):
Hi [~gharris1727]
The task did not restart. Here are the trace logs generated in the following 
scenario.
clean start with the properties file posted above.
create 'mytopic' 1 topic with 1 partition
produce 10,000 messages to source cluster
start a single consumer in source cluster that in a loop
- polls 100 messages
- commitSync
- waits 1 sec
e.g.

while (...) {
    ConsumerRecords<String, String> crs1 = consumer.poll(Duration.ofMillis(1000L));
    // print(crs1, consumer); print last record of polled
    consumer.commitSync(Duration.ofSeconds(10));
    Thread.sleep(1000L);
}

the first checkpoint is only emitted when the consumer catches up fully at 
10,000.
Then another 10,000 messages are produced quickly and the consumer advances, and 
some checkpoints are emitted
so that overall we have

Checkpoint{consumerGroupId=mygroup1, topicPartition=source.mytopic-0, 
upstreamOffset=10000, downstreamOffset=10000, metadata=}
Checkpoint{consumerGroupId=mygroup1, topicPartition=source.mytopic-0, 
upstreamOffset=16700, downstreamOffset=16501, metadata=}
Checkpoint{consumerGroupId=mygroup1, topicPartition=source.mytopic-0, 
upstreamOffset=18200, downstreamOffset=18096, metadata=}
Checkpoint{consumerGroupId=mygroup1, topicPartition=source.mytopic-0, 
upstreamOffset=19200, downstreamOffset=18965, metadata=}
Checkpoint{consumerGroupId=mygroup1, topicPartition=source.mytopic-0, 
upstreamOffset=19700, downstreamOffset=19636, metadata=}
Checkpoint{consumerGroupId=mygroup1, topicPartition=source.mytopic-0, 
upstreamOffset=20000, downstreamOffset=20000, metadata=}

 [^connect.log.2024-04-26-10.zip] 

> Mirromaker2 first Checkpoint not emitted until consumer group fully catches 
> up once
> ---
>
> Key: KAFKA-16622
> URL: https://issues.apache.org/jira/browse/KAFKA-16622
> Project: Kafka
>  Issue Type: Bug
>  Components: mirrormaker
>Affects Versions: 3.7.0, 3.6.2, 3.8.0
>Reporter: Edoardo Comar
>Priority: Major
> Attachments: connect.log.2024-04-26-10.zip, 
> edo-connect-mirror-maker-sourcetarget.properties
>
>
> We observed an excessively delayed emission of the MM2 Checkpoint record.
> It only gets created when the source consumer reaches the end of a topic. 
> This does not seem reasonable.
> In a very simple setup :
> Tested with a standalone single process MirrorMaker2 mirroring between two 
> single-node kafka clusters(mirromaker config attached) with quick refresh 
> intervals (eg 5 sec) and a small offset.lag.max (eg 10)
> create a single topic in the source cluster
> produce data to it (e.g. 10,000 records)
> start a slow consumer - e.g. fetching 50 records/poll and pausing 1 s

[jira] [Comment Edited] (KAFKA-16622) Mirromaker2 first Checkpoint not emitted until consumer group fully catches up once

2024-04-26 Thread Edoardo Comar (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17841134#comment-17841134
 ] 

Edoardo Comar edited comment on KAFKA-16622 at 4/26/24 10:13 AM:
-

Hi [~gharris1727]
The task did not restart. Here are the trace logs generated in the following 
scenario.
clean start with the properties file posted above.
create 'mytopic' 1 topic with 1 partition
produce 10,000 messages to source cluster
start a single consumer in source cluster that in a loop
- polls 100 messages
- commitSync
- waits 1 sec
e.g.

while (...) {
    ConsumerRecords<String, String> crs1 = consumer.poll(Duration.ofMillis(1000L));
    // print(crs1, consumer); print last record of polled
    consumer.commitSync(Duration.ofSeconds(10));
    Thread.sleep(1000L);
}

the first checkpoint is only emitted when the consumer catches up fully at 
10,000.
Then another 10,000 messages are produced quickly and the consumer advances, and 
some checkpoints are emitted
so that overall we have

Checkpoint{consumerGroupId=mygroup1, topicPartition=source.mytopic-0, 
upstreamOffset=10000, downstreamOffset=10000, metadata=}
Checkpoint{consumerGroupId=mygroup1, topicPartition=source.mytopic-0, 
upstreamOffset=16700, downstreamOffset=16501, metadata=}
Checkpoint{consumerGroupId=mygroup1, topicPartition=source.mytopic-0, 
upstreamOffset=18200, downstreamOffset=18096, metadata=}
Checkpoint{consumerGroupId=mygroup1, topicPartition=source.mytopic-0, 
upstreamOffset=19200, downstreamOffset=18965, metadata=}
Checkpoint{consumerGroupId=mygroup1, topicPartition=source.mytopic-0, 
upstreamOffset=19700, downstreamOffset=19636, metadata=}
Checkpoint{consumerGroupId=mygroup1, topicPartition=source.mytopic-0, 
upstreamOffset=20000, downstreamOffset=20000, metadata=}

 [^connect.log.2024-04-26-10.zip] 


was (Author: ecomar):
Hi [~gharris1727]
The task did not restart. Here are the trace logs generated in the following 
scenario.
clean start with the properties file posted above.
create 'mytopic' 1 topic with 1 partition
produce 10,000 messages to source cluster
start a single consumer in source cluster that in a loop
- polls 100 messages
- commitSync
- waits 1 sec
e.g.
```
while (...) {
    ConsumerRecords<String, String> crs1 = consumer.poll(Duration.ofMillis(1000L));
    // print(crs1, consumer); print last record of polled
    consumer.commitSync(Duration.ofSeconds(10));
    Thread.sleep(1000L);
}
```

the first checkpoint is only emitted when the consumer catches up fully at 
10,000.
Then another 10,000 messages are produced quickly and the consumer advances, and 
some checkpoints are emitted
so that overall we have
```
Checkpoint{consumerGroupId=mygroup1, topicPartition=source.mytopic-0, 
upstreamOffset=10000, downstreamOffset=10000, metadata=}
Checkpoint{consumerGroupId=mygroup1, topicPartition=source.mytopic-0, 
upstreamOffset=16700, downstreamOffset=16501, metadata=}
Checkpoint{consumerGroupId=mygroup1, topicPartition=source.mytopic-0, 
upstreamOffset=18200, downstreamOffset=18096, metadata=}
Checkpoint{consumerGroupId=mygroup1, topicPartition=source.mytopic-0, 
upstreamOffset=19200, downstreamOffset=18965, metadata=}
Checkpoint{consumerGroupId=mygroup1, topicPartition=source.mytopic-0, 
upstreamOffset=19700, downstreamOffset=19636, metadata=}
Checkpoint{consumerGroupId=mygroup1, topicPartition=source.mytopic-0, 
upstreamOffset=20000, downstreamOffset=20000, metadata=}
```
 [^connect.log.2024-04-26-10.zip] 

> Mirromaker2 first Checkpoint not emitted until consumer group fully catches 
> up once
> ---
>
> Key: KAFKA-16622
> URL: https://issues.apache.org/jira/browse/KAFKA-16622
> Project: Kafka
>  Issue Type: Bug
>  Components: mirrormaker
>Affects Versions: 3.7.0, 3.6.2, 3.8.0
>Reporter: Edoardo Comar
>Priority: Major
> Attachments: connect.log.2024-04-26-10.zip, 
> edo-connect-mirror-maker-sourcetarget.properties
>
>
> We observed an excessively delayed emission of the MM2 Checkpoint record.
> It only gets created when the source consumer reaches the end of a topic. 
> This does not seem reasonable.
> In a very simple setup :
> Tested with a standalone single process MirrorMaker2 mirroring between two 
> single-node kafka clusters(mirromaker config attached) with quick refresh 
> intervals (eg 5 sec) and a small offset.lag.max (eg 10)
> create a single topic in the source cluster
> produce data to it (e.g. 10,000 records)
> start a slow consumer - e.g. fetching 50 records/poll and pausing 1 sec 
> between polls which commits after each poll
&

[jira] [Comment Edited] (KAFKA-16622) Mirromaker2 first Checkpoint not emitted until consumer group fully catches up once

2024-04-26 Thread Edoardo Comar (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17841134#comment-17841134
 ] 

Edoardo Comar edited comment on KAFKA-16622 at 4/26/24 10:12 AM:
-

Hi [~gharris1727]
The task did not restart. Here are the trace logs generated in the following 
scenario.
clean start with the properties file posted above.
create 'mytopic' 1 topic with 1 partition
produce 10,000 messages to source cluster
start a single consumer in source cluster that in a loop
- polls 100 messages
- commitSync
- waits 1 sec
e.g.
```
while (...) {
    ConsumerRecords<String, String> crs1 = consumer.poll(Duration.ofMillis(1000L));
    // print(crs1, consumer); print last record of polled
    consumer.commitSync(Duration.ofSeconds(10));
    Thread.sleep(1000L);
}
```

the first checkpoint is only emitted when the consumer catches up fully at 
10,000.
Then another 10,000 messages are produced quickly and the consumer advances, and 
some checkpoints are emitted
so that overall we have
```
Checkpoint{consumerGroupId=mygroup1, topicPartition=source.mytopic-0, 
upstreamOffset=10000, downstreamOffset=10000, metadata=}
Checkpoint{consumerGroupId=mygroup1, topicPartition=source.mytopic-0, 
upstreamOffset=16700, downstreamOffset=16501, metadata=}
Checkpoint{consumerGroupId=mygroup1, topicPartition=source.mytopic-0, 
upstreamOffset=18200, downstreamOffset=18096, metadata=}
Checkpoint{consumerGroupId=mygroup1, topicPartition=source.mytopic-0, 
upstreamOffset=19200, downstreamOffset=18965, metadata=}
Checkpoint{consumerGroupId=mygroup1, topicPartition=source.mytopic-0, 
upstreamOffset=19700, downstreamOffset=19636, metadata=}
Checkpoint{consumerGroupId=mygroup1, topicPartition=source.mytopic-0, 
upstreamOffset=20000, downstreamOffset=20000, metadata=}
```
 [^connect.log.2024-04-26-10.zip] 


was (Author: ecomar):
Hi [~gharris1727]
The task did not restart. Here are the trace logs generated in the following 
scenario.
clean start with the properties file posted above.
create 'mytopic' 1 topic with 1 partition
produce 10,000 messages to source cluster
start a single consumer in source cluster that in a loop
- polls 100 messages
- commitSync
- waits 1 sec
e.g.
```
while (...) {
ConsumerRecords crs1 = 
consumer.poll(Duration.ofMillis(1000L));
   // print(crs1, consumer); print last record of polled
consumer.commitSync(Duration.ofSeconds(10));
Thread.sleep(1000L);
  }
```

the first checkpoint is only emitted when the consumer catches up fully at 
10,000.
Then another 10,000 messages are produced quickly and the consumer advances, and 
some checkpoints are emitted
so that overall we have
```
Checkpoint{consumerGroupId=mygroup1, topicPartition=source.mytopic-0, 
upstreamOffset=10000, downstreamOffset=10000, metadata=}
Checkpoint{consumerGroupId=mygroup1, topicPartition=source.mytopic-0, 
upstreamOffset=16700, downstreamOffset=16501, metadata=}
Checkpoint{consumerGroupId=mygroup1, topicPartition=source.mytopic-0, 
upstreamOffset=18200, downstreamOffset=18096, metadata=}
Checkpoint{consumerGroupId=mygroup1, topicPartition=source.mytopic-0, 
upstreamOffset=19200, downstreamOffset=18965, metadata=}
Checkpoint{consumerGroupId=mygroup1, topicPartition=source.mytopic-0, 
upstreamOffset=19700, downstreamOffset=19636, metadata=}
Checkpoint{consumerGroupId=mygroup1, topicPartition=source.mytopic-0, 
upstreamOffset=20000, downstreamOffset=20000, metadata=}
```
 [^connect.log.2024-04-26-10.zip] 

> Mirromaker2 first Checkpoint not emitted until consumer group fully catches 
> up once
> ---
>
> Key: KAFKA-16622
> URL: https://issues.apache.org/jira/browse/KAFKA-16622
> Project: Kafka
>  Issue Type: Bug
>  Components: mirrormaker
>Affects Versions: 3.7.0, 3.6.2, 3.8.0
>Reporter: Edoardo Comar
>Priority: Major
> Attachments: connect.log.2024-04-26-10.zip, 
> edo-connect-mirror-maker-sourcetarget.properties
>
>
> We observed an excessively delayed emission of the MM2 Checkpoint record.
> It only gets created when the source consumer reaches the end of a topic. 
> This does not seem reasonable.
> In a very simple setup :
> Tested with a standalone single process MirrorMaker2 mirroring between two 
> single-node kafka clusters(mirromaker config attached) with quick refresh 
> intervals (eg 5 sec) and a small offset.lag.max (eg 10)
> create a single topic in the source cluster
> produce data to it (e.g. 10,000 records)
> start a slow consumer - e.g. fetching 50 records/poll and pausing 1 sec 
> between polls which commits after each poll
&

[jira] [Commented] (KAFKA-16622) Mirromaker2 first Checkpoint not emitted until consumer group fully catches up once

2024-04-26 Thread Edoardo Comar (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17841134#comment-17841134
 ] 

Edoardo Comar commented on KAFKA-16622:
---

Hi [~gharris1727]
The task did not restart. Here are the trace logs generated in the following 
scenario.
clean start with the properties file posted above.
create 'mytopic' 1 topic with 1 partition
produce 10,000 messages to source cluster
start a single consumer in source cluster that in a loop
- polls 100 messages
- commitSync
- waits 1 sec
e.g.
```
while (...) {
ConsumerRecords crs1 = 
consumer.poll(Duration.ofMillis(1000L));
   // print(crs1, consumer); print last record of polled
consumer.commitSync(Duration.ofSeconds(10));
Thread.sleep(1000L);
  }
```

the first checkpoint is only emitted when the consumer catches up fully at 
10,000.
Then another 10,000 messages are produced quickly and the consumer advances, and 
some checkpoints are emitted
so that overall we have
```
Checkpoint{consumerGroupId=mygroup1, topicPartition=source.mytopic-0, 
upstreamOffset=10000, downstreamOffset=10000, metadata=}
Checkpoint{consumerGroupId=mygroup1, topicPartition=source.mytopic-0, 
upstreamOffset=16700, downstreamOffset=16501, metadata=}
Checkpoint{consumerGroupId=mygroup1, topicPartition=source.mytopic-0, 
upstreamOffset=18200, downstreamOffset=18096, metadata=}
Checkpoint{consumerGroupId=mygroup1, topicPartition=source.mytopic-0, 
upstreamOffset=19200, downstreamOffset=18965, metadata=}
Checkpoint{consumerGroupId=mygroup1, topicPartition=source.mytopic-0, 
upstreamOffset=19700, downstreamOffset=19636, metadata=}
Checkpoint{consumerGroupId=mygroup1, topicPartition=source.mytopic-0, 
upstreamOffset=20000, downstreamOffset=20000, metadata=}
```
 [^connect.log.2024-04-26-10.zip] 

> Mirromaker2 first Checkpoint not emitted until consumer group fully catches 
> up once
> ---
>
> Key: KAFKA-16622
> URL: https://issues.apache.org/jira/browse/KAFKA-16622
> Project: Kafka
>  Issue Type: Bug
>  Components: mirrormaker
>Affects Versions: 3.7.0, 3.6.2, 3.8.0
>Reporter: Edoardo Comar
>Priority: Major
> Attachments: connect.log.2024-04-26-10.zip, 
> edo-connect-mirror-maker-sourcetarget.properties
>
>
> We observed an excessively delayed emission of the MM2 Checkpoint record.
> It only gets created when the source consumer reaches the end of a topic. 
> This does not seem reasonable.
> In a very simple setup :
> Tested with a standalone single process MirrorMaker2 mirroring between two 
> single-node kafka clusters(mirromaker config attached) with quick refresh 
> intervals (eg 5 sec) and a small offset.lag.max (eg 10)
> create a single topic in the source cluster
> produce data to it (e.g. 10,000 records)
> start a slow consumer - e.g. fetching 50 records/poll and pausing 1 sec 
> between polls which commits after each poll
> watch the Checkpoint topic in the target cluster
> bin/kafka-console-consumer.sh --bootstrap-server localhost:9192 \
>   --topic source.checkpoints.internal \
>   --formatter org.apache.kafka.connect.mirror.formatters.CheckpointFormatter \
>--from-beginning
> -> no record appears in the checkpoint topic until the consumer reaches the 
> end of the topic (ie its consumer group lag gets down to 0).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16622) Mirromaker2 first Checkpoint not emitted until consumer group fully catches up once

2024-04-26 Thread Edoardo Comar (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Edoardo Comar updated KAFKA-16622:
--
Attachment: connect.log.2024-04-26-10.zip

> Mirromaker2 first Checkpoint not emitted until consumer group fully catches 
> up once
> ---
>
> Key: KAFKA-16622
> URL: https://issues.apache.org/jira/browse/KAFKA-16622
> Project: Kafka
>  Issue Type: Bug
>  Components: mirrormaker
>Affects Versions: 3.7.0, 3.6.2, 3.8.0
>Reporter: Edoardo Comar
>Priority: Major
> Attachments: connect.log.2024-04-26-10.zip, 
> edo-connect-mirror-maker-sourcetarget.properties
>
>
> We observed an excessively delayed emission of the MM2 Checkpoint record.
> It only gets created when the source consumer reaches the end of a topic. 
> This does not seem reasonable.
> In a very simple setup :
> Tested with a standalone single process MirrorMaker2 mirroring between two 
> single-node kafka clusters(mirromaker config attached) with quick refresh 
> intervals (eg 5 sec) and a small offset.lag.max (eg 10)
> create a single topic in the source cluster
> produce data to it (e.g. 10,000 records)
> start a slow consumer - e.g. fetching 50 records/poll and pausing 1 sec 
> between polls which commits after each poll
> watch the Checkpoint topic in the target cluster
> bin/kafka-console-consumer.sh --bootstrap-server localhost:9192 \
>   --topic source.checkpoints.internal \
>   --formatter org.apache.kafka.connect.mirror.formatters.CheckpointFormatter \
>--from-beginning
> -> no record appears in the checkpoint topic until the consumer reaches the 
> end of the topic (ie its consumer group lag gets down to 0).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-16621) Alter MirrorSourceConnector offsets dont work

2024-04-26 Thread yuzhou (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

yuzhou resolved KAFKA-16621.

Resolution: Not A Bug

Fixed in KAFKA-15182: Normalize source connector offsets before invoking 
SourceConnector::alterOffsets (#14003)

> Alter MirrorSourceConnector offsets dont work
> -
>
> Key: KAFKA-16621
> URL: https://issues.apache.org/jira/browse/KAFKA-16621
> Project: Kafka
>  Issue Type: Bug
>  Components: connect
>Reporter: yuzhou
>Priority: Major
> Attachments: image-2024-04-25-21-28-37-375.png
>
>
> In the connect-offsets topic:
> the offsets written by the connector use the key 
> `\{"cluster":"A","partition":2,"topic":"topic"}`
> after altering offsets, the key is  
> `\{"partition":2,"topic":"topic","cluster":"A"}`
> !image-2024-04-25-21-28-37-375.png!
> in Worker.globalOffsetBackingStore.data, both keys exist, because they are 
> different strings:
> {"cluster":"A","partition":2,"topic":"topic"} {"offset":2}
>  
> {"partition":2,"topic":"topic","cluster":"A"}{"offset":3}
> So altering offsets is not successful, because getting offsets from 
> globalOffsetBackingStore always returns 
> {"offset":2}
>  
>  
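
For context, a small illustration of why the two keys collide only after normalization (this uses Jackson directly and is not the Connect worker's own serialization path):

{code:java}
import java.util.Map;
import com.fasterxml.jackson.databind.ObjectMapper;

public class OffsetKeyOrderingExample {
    public static void main(String[] args) throws Exception {
        String connectorKey = "{\"cluster\":\"A\",\"partition\":2,\"topic\":\"topic\"}";
        String alteredKey   = "{\"partition\":2,\"topic\":\"topic\",\"cluster\":\"A\"}";

        // As raw strings the keys differ, so a string/byte-keyed offset store sees two entries.
        System.out.println(connectorKey.equals(alteredKey)); // false

        // Parsed into maps they describe the same source partition.
        ObjectMapper mapper = new ObjectMapper();
        Map<?, ?> a = mapper.readValue(connectorKey, Map.class);
        Map<?, ?> b = mapper.readValue(alteredKey, Map.class);
        System.out.println(a.equals(b)); // true
    }
}
{code}

KAFKA-15182 addresses this by normalizing source connector offsets before invoking SourceConnector::alterOffsets, so both write paths produce one canonical key.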



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-16585) No way to forward message from punctuation method in the FixedKeyProcessor

2024-04-26 Thread Stanislav Spiridonov (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17841079#comment-17841079
 ] 

Stanislav Spiridonov commented on KAFKA-16585:
--

Ok. Let's not make simple things complicated. If the processor is more suitable 
and doesn't (won't) add additional overhead - it is the best option to implement 
such things.

> No way to forward message from punctuation method in the FixedKeyProcessor
> --
>
> Key: KAFKA-16585
> URL: https://issues.apache.org/jira/browse/KAFKA-16585
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 3.6.2
>Reporter: Stanislav Spiridonov
>Priority: Major
>
> The FixedKeyProcessorContext can forward only FixedKeyRecord. This class 
> doesn't have a public constructor and can only be created based on existing 
> records. But such a record usually is absent in the punctuation method.
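
The comment above refers to using a regular processor instead; as an illustrative sketch (not project code), a plain Processor can construct and forward brand-new Record instances from a punctuator, because Record has a public constructor while FixedKeyRecord does not:

{code:java}
import java.time.Duration;
import org.apache.kafka.streams.processor.PunctuationType;
import org.apache.kafka.streams.processor.api.Processor;
import org.apache.kafka.streams.processor.api.ProcessorContext;
import org.apache.kafka.streams.processor.api.Record;

public class PeriodicEmitter implements Processor<String, String, String, String> {
    private ProcessorContext<String, String> context;

    @Override
    public void init(ProcessorContext<String, String> context) {
        this.context = context;
        // No incoming record is needed: key, value and timestamp are created by the punctuator.
        context.schedule(Duration.ofSeconds(30), PunctuationType.WALL_CLOCK_TIME, timestamp ->
                context.forward(new Record<>("heartbeat", "tick", timestamp)));
    }

    @Override
    public void process(Record<String, String> record) {
        context.forward(record); // regular records pass through unchanged
    }
}
{code}

The trade-off, as the discussion notes, is giving up the fixed-key guarantee that processValues()/FixedKeyProcessor provides.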



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-16622) Mirromaker2 first Checkpoint not emitted until consumer group fully catches up once

2024-04-26 Thread Greg Harris (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17841072#comment-17841072
 ] 

Greg Harris commented on KAFKA-16622:
-

[~ecomar] Do you know if the MirrorCheckpointTask started/restarted after the 
MirrorSourceTask replicated the data? This sounds like the known issue 
https://issues.apache.org/jira/browse/KAFKA-15905 .

Or can you post some TRACE logging from OffsetSyncStore, to show how it decided 
to translate using 1?

I would expect that the offsets are translated at some coarse steps between 0 
and 1. With a 10 second translation latency (5+5), you would get translation 
about every 500 records of consumer progress. 1 records is enough to see 
the 512, 1024, 2048, 4096, and 8192 offset translation steps, so there should 
be ~5 checkpoints if it is working optimally.
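
For reference, a sketch of how that TRACE logging could be switched on for a standalone MirrorMaker2 run (assuming a log4j 1.x-style properties file such as config/connect-log4j.properties; the exact file depends on how MM2 is launched):
{code}
# enable offset translation tracing for MirrorMaker2 / Connect
log4j.logger.org.apache.kafka.connect.mirror.OffsetSyncStore=TRACE
# DEBUG on the whole mirror package also shows translateDownstream decisions
log4j.logger.org.apache.kafka.connect.mirror=DEBUG
{code}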

> Mirromaker2 first Checkpoint not emitted until consumer group fully catches 
> up once
> ---
>
> Key: KAFKA-16622
> URL: https://issues.apache.org/jira/browse/KAFKA-16622
> Project: Kafka
>  Issue Type: Bug
>  Components: mirrormaker
>Affects Versions: 3.7.0, 3.6.2, 3.8.0
>Reporter: Edoardo Comar
>Priority: Major
> Attachments: edo-connect-mirror-maker-sourcetarget.properties
>
>
> We observed an excessively delayed emission of the MM2 Checkpoint record.
> It only gets created when the source consumer reaches the end of a topic. 
> This does not seem reasonable.
> In a very simple setup:
> Tested with a standalone single-process MirrorMaker2 mirroring between two 
> single-node Kafka clusters (mirrormaker config attached) with quick refresh 
> intervals (e.g. 5 sec) and a small offset.lag.max (e.g. 10):
> create a single topic in the source cluster
> produce data to it (e.g. 1 records)
> start a slow consumer - e.g. fetching 50 records/poll and pausing 1 sec 
> between polls, which commits after each poll
> watch the Checkpoint topic in the target cluster
> bin/kafka-console-consumer.sh --bootstrap-server localhost:9192 \
>   --topic source.checkpoints.internal \
>   --formatter org.apache.kafka.connect.mirror.formatters.CheckpointFormatter \
>   --from-beginning
> -> no record appears in the checkpoint topic until the consumer reaches the 
> end of the topic (i.e. its consumer group lag gets down to 0).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16627) Remove ClusterConfig parameter in BeforeEach and AfterEach

2024-04-26 Thread Kuan Po Tseng (Jira)
Kuan Po Tseng created KAFKA-16627:
-

 Summary: Remove ClusterConfig parameter in BeforeEach and AfterEach
 Key: KAFKA-16627
 URL: https://issues.apache.org/jira/browse/KAFKA-16627
 Project: Kafka
  Issue Type: Improvement
Reporter: Kuan Po Tseng
Assignee: Kuan Po Tseng


In the past we modified configs like server broker properties by modifying the 
ClusterConfig reference passed to BeforeEach and AfterEach, based on the 
requirements of the tests.

However, after KAFKA-16560 the ClusterConfig became immutable, so modifying the 
ClusterConfig reference no longer reflects any changes to the test cluster. 
Passing ClusterConfig to BeforeEach and AfterEach has therefore become 
redundant. We should remove this behavior.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-16584) Make log processing summary configurable or debug

2024-04-25 Thread dujian (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17841056#comment-17841056
 ] 

dujian commented on KAFKA-16584:


Hello [~mjsax],

Before creating a KIP I must create a wiki ID, but the registration function at 
[https://cwiki.apache.org/confluence/signup.action] is turned off. Can you help 
me?

> Make log processing summary configurable or debug
> -
>
> Key: KAFKA-16584
> URL: https://issues.apache.org/jira/browse/KAFKA-16584
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 3.6.2
>Reporter: Andras Hatvani
>Assignee: dujian
>Priority: Major
>  Labels: needs-kip, newbie
>
> Currently *every two minutes for every stream thread* statistics will be 
> logged on INFO log level. 
> {code}
> 2024-04-18T09:18:23.790+02:00  INFO 33178 --- [service] [-StreamThread-1] 
> o.a.k.s.p.internals.StreamThread         : stream-thread 
> [service-149405a3-c7e3-4505-8bbd-c3bff226b115-StreamThread-1] Processed 0 
> total records, ran 0 punctuators, and committed 0 total tasks since the last 
> update {code}
> This is absolutely unnecessary and even harmful since it fills the logs and 
> thus storage space with unwanted and useless data. Otherwise the INFO logs 
> are useful and helpful, therefore it's not an option to raise the log level 
> to WARN.
> Please make the logProcessingSummary 
> * either a DEBUG-level log, or
> * configurable so that it can be disabled.
> This is the relevant code: 
> https://github.com/apache/kafka/blob/aee9724ee15ed539ae73c09cc2c2eda83ae3c864/streams/src/main/java/org/apache/kafka/streams/processor/internals/StreamThread.java#L1073



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16626) Uuid to String for subscribed topic names in assignment spec

2024-04-25 Thread Ritika Reddy (Jira)
Ritika Reddy created KAFKA-16626:


 Summary: Uuid to String for subscribed topic names in assignment 
spec
 Key: KAFKA-16626
 URL: https://issues.apache.org/jira/browse/KAFKA-16626
 Project: Kafka
  Issue Type: Sub-task
Reporter: Ritika Reddy
Assignee: Ritika Reddy


In creating the assignment spec from the existing consumer subscription 
metadata, quite some time is spent converting the String topic names to Uuids.

Change from Uuid to String for the subscribed topics in the assignment spec and 
convert on the fly.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16625) Reverse Lookup Partition to Member in Assignors

2024-04-25 Thread Ritika Reddy (Jira)
Ritika Reddy created KAFKA-16625:


 Summary: Reverse Lookup Partition to Member in Assignors
 Key: KAFKA-16625
 URL: https://issues.apache.org/jira/browse/KAFKA-16625
 Project: Kafka
  Issue Type: Sub-task
Reporter: Ritika Reddy
Assignee: Ritika Reddy


Calculating unassigned partitions within the Uniform assignor is a costly 
process; this can be improved by using a reverse lookup map from 
topicIdPartition to the owning member.
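
A minimal sketch of the proposed data structure (the TopicIdPartition record and method names here are illustrative stand-ins, not the actual group-coordinator classes): build the partition-to-member map once, and unassigned partitions fall out as a set difference instead of a scan over every member's assignment.
{code:java}
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class ReverseLookupSketch {

    // Hypothetical stand-in for a topic-id/partition pair (Java record for brevity)
    record TopicIdPartition(String topicId, int partition) {}

    /** Build partition -> owning member from member -> assigned partitions. */
    static Map<TopicIdPartition, String> reverseLookup(Map<String, List<TopicIdPartition>> assignments) {
        Map<TopicIdPartition, String> owners = new HashMap<>();
        assignments.forEach((member, partitions) ->
                partitions.forEach(p -> owners.put(p, member)));
        return owners;
    }

    /** Unassigned = all partitions minus the keys of the reverse map. */
    static Set<TopicIdPartition> unassigned(Set<TopicIdPartition> allPartitions,
                                            Map<TopicIdPartition, String> owners) {
        Set<TopicIdPartition> result = new HashSet<>(allPartitions);
        result.removeAll(owners.keySet());
        return result;
    }
}
{code}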



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (KAFKA-12534) kafka-configs does not work with ssl enabled kafka broker.

2024-04-25 Thread keith.paulson (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-12534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17840985#comment-17840985
 ] 

keith.paulson edited comment on KAFKA-12534 at 4/25/24 9:26 PM:


I can reproduce this with kafka 3.6.0 and BCFKS keystores.  

 

Changing keystore and password gives 
{code:java}
 ERROR Encountered metadata publishing fault: Error updating node with new 
configuration: listener.name.SSL.ssl.key.password -> 
[hidden],listener.name.SSL.ssl.keystore.location -> /etc/ssl/cert.bcfks in 
MetadataDelta up to 5940236 
(org.apache.kafka.server.fault.LoggingFaultHandler)
org.apache.kafka.common.config.ConfigException: Validation of dynamic config 
update of SSLFactory failed: org.apache.kafka.common.KafkaException: Failed to 
load SSL keystore /etc/ssl/cert.bcfks of type BCFKS {code}
but on a kafka restart, that keystore/password combination works fine

 

If I change just the keystore (i.e. new keystore created using same password as 
previous one), the cert change works
{code:java}
[2024-04-25 21:15:47,065] INFO [DynamicConfigPublisher broker id=1] Updating 
node 1 with new configuration : listener.name.SSL.ssl.keystore.location -> 
/etc/ssl/cert.bcfks (kafka.server.metadata.DynamicConfigPublisher)
{code}
 

Not being able to change passwords is a significant limitation.


was (Author: JIRAUSER299451):
I can reproduce this with kafka 3.6.0 and BCFKS keystores.  

 

Changing keystore and password gives 
{code:java}
 ERROR Encountered metadata publishing fault: Error updating node with new 
configuration: listener.name.SSL.ssl.key.password -> 
[hidden],listener.name.SSL.ssl.keystore.location -> 
/etc/ssl/private/kafkachain.bcfks in MetadataDelta up to 5940236 
(org.apache.kafka.server.fault.LoggingFaultHandler)
org.apache.kafka.common.config.ConfigException: Validation of dynamic config 
update of SSLFactory failed: org.apache.kafka.common.KafkaException: Failed to 
load SSL keystore /etc/ssl/private/kafkachain.bcfks of type BCFKS {code}
but on a kafka restart, that keystore/password combination works fine

 

If I change just the keystore (ie new keystore created using same password as 
previous one)
{code:java}
[2024-04-25 21:15:47,065] INFO [DynamicConfigPublisher broker id=1] Updating 
node 1 with new configuration : listener.name.SSL.ssl.keystore.location -> 
/etc/ssl/private/kafkachain.bcfks (kafka.server.metadata.DynamicConfigPublisher)
{code}
 

Not being able to change passwords is a significant limitation.

> kafka-configs does not work with ssl enabled kafka broker.
> --
>
> Key: KAFKA-12534
>     URL: https://issues.apache.org/jira/browse/KAFKA-12534
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.6.1
>Reporter: kaushik srinivas
>Priority: Critical
>
> We are trying to change the trust store password on the fly using the 
> kafka-configs script for a ssl enabled kafka broker.
> Below is the command used:
> kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers 
> --entity-name 1001 --alter --add-config 'ssl.truststore.password=xxx'
> But we see below error in the broker logs when the command is run.
> {"type":"log", "host":"kf-2-0", "level":"INFO", 
> "neid":"kafka-cfd5ccf2af7f47868e83473408", "system":"kafka", 
> "time":"2021-03-23T12:14:40.055", "timezone":"UTC", 
> "log":\{"message":"data-plane-kafka-network-thread-1002-ListenerName(SSL)-SSL-2
>  - org.apache.kafka.common.network.Selector - [SocketServer brokerId=1002] 
> Failed authentication with /127.0.0.1 (SSL handshake failed)"}}
>  How can anyone configure ssl certs for the kafka-configs script and succeed 
> with the ssl handshake in this case ? 
> Note : 
> We are trying with a single listener i.e SSL: 
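
On the quoted question of pointing the tool at an SSL-only listener: kafka-configs.sh accepts an admin client properties file via --command-config. A sketch (file name, port and passwords are placeholders):
{code}
# admin-ssl.properties (placeholder values)
security.protocol=SSL
ssl.truststore.location=/etc/ssl/private/client.truststore.jks
ssl.truststore.password=changeit

# run the tool against the SSL listener using that client config
bin/kafka-configs.sh --bootstrap-server broker1:9093 \
  --command-config admin-ssl.properties \
  --entity-type brokers --entity-name 1001 \
  --alter --add-config 'ssl.truststore.password=xxx'
{code}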



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-12534) kafka-configs does not work with ssl enabled kafka broker.

2024-04-25 Thread keith.paulson (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-12534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17840985#comment-17840985
 ] 

keith.paulson commented on KAFKA-12534:
---

I can reproduce this with kafka 3.6.0 and BCFKS keystores.  

 

Changing keystore and password gives 
{code:java}
 ERROR Encountered metadata publishing fault: Error updating node with new 
configuration: listener.name.SSL.ssl.key.password -> 
[hidden],listener.name.SSL.ssl.keystore.location -> 
/etc/ssl/private/kafkachain.bcfks in MetadataDelta up to 5940236 
(org.apache.kafka.server.fault.LoggingFaultHandler)
org.apache.kafka.common.config.ConfigException: Validation of dynamic config 
update of SSLFactory failed: org.apache.kafka.common.KafkaException: Failed to 
load SSL keystore /etc/ssl/private/kafkachain.bcfks of type BCFKS {code}
but on a kafka restart, that keystore/password combination works fine

 

If I change just the keystore (ie new keystore created using same password as 
previous one)
{code:java}
[2024-04-25 21:15:47,065] INFO [DynamicConfigPublisher broker id=1] Updating 
node 1 with new configuration : listener.name.SSL.ssl.keystore.location -> 
/etc/ssl/private/kafkachain.bcfks (kafka.server.metadata.DynamicConfigPublisher)
{code}
 

Not being able to change passwords is a significant limitation.

> kafka-configs does not work with ssl enabled kafka broker.
> --
>
> Key: KAFKA-12534
> URL: https://issues.apache.org/jira/browse/KAFKA-12534
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.6.1
>Reporter: kaushik srinivas
>Priority: Critical
>
> We are trying to change the trust store password on the fly using the 
> kafka-configs script for a ssl enabled kafka broker.
> Below is the command used:
> kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers 
> --entity-name 1001 --alter --add-config 'ssl.truststore.password=xxx'
> But we see below error in the broker logs when the command is run.
> {"type":"log", "host":"kf-2-0", "level":"INFO", 
> "neid":"kafka-cfd5ccf2af7f47868e83473408", "system":"kafka", 
> "time":"2021-03-23T12:14:40.055", "timezone":"UTC", 
> "log":\{"message":"data-plane-kafka-network-thread-1002-ListenerName(SSL)-SSL-2
>  - org.apache.kafka.common.network.Selector - [SocketServer brokerId=1002] 
> Failed authentication with /127.0.0.1 (SSL handshake failed)"}}
>  How can anyone configure ssl certs for the kafka-configs script and succeed 
> with the ssl handshake in this case ? 
> Note : 
> We are trying with a single listener i.e SSL: 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16624) Don't generate useless PartitionChangeRecord on older MV

2024-04-25 Thread Colin McCabe (Jira)
Colin McCabe created KAFKA-16624:


 Summary: Don't generate useless PartitionChangeRecord on older MV
 Key: KAFKA-16624
 URL: https://issues.apache.org/jira/browse/KAFKA-16624
 Project: Kafka
  Issue Type: Bug
Reporter: Colin McCabe
Assignee: Colin McCabe


Fix a case where we could generate useless PartitionChangeRecords on metadata 
versions older than 3.6-IV0. This could happen in the case where we had an ISR 
with only one broker in it, and we were trying to go down to a fully empty ISR. 
In this case, PartitionChangeBuilder would block the ISR from going down to a 
fully empty ISR (since that is not valid in these pre-KIP-966 metadata 
versions), but it would still emit the record, even though it had no effect.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-16622) Mirromaker2 first Checkpoint not emitted until consumer group fully catches up once

2024-04-25 Thread Edoardo Comar (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17840966#comment-17840966
 ] 

Edoardo Comar commented on KAFKA-16622:
---

Activating DEBUG logging:
{code}
[2024-04-25 21:20:10,856] DEBUG [MirrorCheckpointConnector|task-0] 
translateDownstream(mygroup1,mytopic-0,13805): Skipped 
(OffsetSync{topicPartition=mytopic-0, upstreamOffset=1, 
downstreamOffset=1} is ahead of upstream consumer group 13805) 
(org.apache.kafka.connect.mirror.OffsetSyncStore:125)
{code}
The checkpoint is not emitted because the topic-partition has been mirrored 
further than where the consumer group is,
so until the group catches up no checkpoints will be emitted.

Question for [~gregharris73]:
would this behavior mean that any consumers in groups that are behind the log 
end, and that are switched from consuming from the source cluster to the target 
cluster, have to reprocess the entire partition? They would have access to no 
translated offsets.


> Mirromaker2 first Checkpoint not emitted until consumer group fully catches 
> up once
> ---
>
> Key: KAFKA-16622
> URL: https://issues.apache.org/jira/browse/KAFKA-16622
> Project: Kafka
>  Issue Type: Bug
>  Components: mirrormaker
>Affects Versions: 3.7.0, 3.6.2, 3.8.0
>Reporter: Edoardo Comar
>Priority: Major
> Attachments: edo-connect-mirror-maker-sourcetarget.properties
>
>
> We observed an excessively delayed emission of the MM2 Checkpoint record.
> It only gets created when the source consumer reaches the end of a topic. 
> This does not seem reasonable.
> In a very simple setup:
> Tested with a standalone single-process MirrorMaker2 mirroring between two 
> single-node Kafka clusters (mirrormaker config attached) with quick refresh 
> intervals (e.g. 5 sec) and a small offset.lag.max (e.g. 10):
> create a single topic in the source cluster
> produce data to it (e.g. 1 records)
> start a slow consumer - e.g. fetching 50 records/poll and pausing 1 sec 
> between polls, which commits after each poll
> watch the Checkpoint topic in the target cluster
> bin/kafka-console-consumer.sh --bootstrap-server localhost:9192 \
>   --topic source.checkpoints.internal \
>   --formatter org.apache.kafka.connect.mirror.formatters.CheckpointFormatter \
>   --from-beginning
> -> no record appears in the checkpoint topic until the consumer reaches the 
> end of the topic (i.e. its consumer group lag gets down to 0).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-16585) No way to forward message from punctuation method in the FixedKeyProcessor

2024-04-25 Thread Matthias J. Sax (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17840907#comment-17840907
 ] 

Matthias J. Sax commented on KAFKA-16585:
-

{quote}I can use the regular Processor, but as I understand it add some 
overhead comparing with FixedKeyProcessor
{quote}
Where did you get this? The Processor itself does not have overhead. – The only 
thing that could happen downstream is that an unnecessary repartition step 
could be inserted. We are tackling this via 
[https://cwiki.apache.org/confluence/display/KAFKA/KIP-759%3A+Unneeded+repartition+canceling]
{quote}{color:#172b4d}Really, I think FixedKeyProcessor do not need to be 
"ensure that the key is not changed". IMHO there is enough to have a key from 
the same partition. So, if you will provide the way to generate the 
{color}*FixedKeyRecord*{color:#172b4d} from any local store it will be 
enough.{color}
{quote}
{color:#172b4d}Well, technically yes, but there is no simple way to 
enforce/check this... We would need to serialize the provided key, pipe it 
through the Partitioner, and compare the computed partition. Or is there some 
other way to do this? – This would be quite expensive to do.{color}

{color:#172b4d}If you feel strong about all this, feel free to do a POC PR and 
write a KIP about it, and we can take it from there. I don't see a simple way 
to do it, and I believe that using a regular Processor is the right way to go 
(especially with KIP-759 on the horizon). {color}
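
A minimal sketch of the regular-Processor route suggested above, forwarding a newly built Record from a punctuator (topic names, the 30s interval and the heartbeat payload are illustrative; the processor API types are from org.apache.kafka.streams.processor.api):
{code:java}
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Produced;
import org.apache.kafka.streams.processor.PunctuationType;
import org.apache.kafka.streams.processor.api.Processor;
import org.apache.kafka.streams.processor.api.ProcessorContext;
import org.apache.kafka.streams.processor.api.ProcessorSupplier;
import org.apache.kafka.streams.processor.api.Record;

import java.time.Duration;

public class PunctuatorForwardSketch {

    // A regular Processor may forward brand-new Records, including from a
    // punctuator where no input record exists (unlike FixedKeyProcessor).
    static class PeriodicEmitter implements Processor<String, String, String, String> {
        private ProcessorContext<String, String> context;

        @Override
        public void init(final ProcessorContext<String, String> context) {
            this.context = context;
            context.schedule(Duration.ofSeconds(30), PunctuationType.WALL_CLOCK_TIME,
                timestamp -> context.forward(
                    new Record<>("heartbeat", "emitted-from-punctuator", timestamp)));
        }

        @Override
        public void process(final Record<String, String> record) {
            context.forward(record); // pass regular records through unchanged
        }
    }

    public static void main(final String[] args) {
        final StreamsBuilder builder = new StreamsBuilder();
        final ProcessorSupplier<String, String, String, String> supplier = PeriodicEmitter::new;
        builder.stream("input-topic", Consumed.with(Serdes.String(), Serdes.String()))
            .process(supplier)
            .to("output-topic", Produced.with(Serdes.String(), Serdes.String()));
        // builder.build() would then be passed to new KafkaStreams(topology, props)
    }
}
{code}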

> No way to forward message from punctuation method in the FixedKeyProcessor
> --
>
> Key: KAFKA-16585
> URL: https://issues.apache.org/jira/browse/KAFKA-16585
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 3.6.2
>Reporter: Stanislav Spiridonov
>Priority: Major
>
> The FixedKeyProcessorContext can forward only FixedKeyRecord. This class 
> doesn't have a public constructor and can only be created based on existing 
> records. But such a record is usually absent in the punctuation method.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16386) NETWORK_EXCEPTIONs from transaction verification are not translated

2024-04-25 Thread Justine Olshan (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Justine Olshan updated KAFKA-16386:
---
Fix Version/s: 3.7.1

> NETWORK_EXCEPTIONs from transaction verification are not translated
> ---
>
> Key: KAFKA-16386
> URL: https://issues.apache.org/jira/browse/KAFKA-16386
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 3.6.0, 3.7.0
>Reporter: Sean Quah
>Priority: Minor
> Fix For: 3.8.0, 3.7.1
>
>
> KAFKA-14402 
> ([KIP-890|https://cwiki.apache.org/confluence/display/KAFKA/KIP-890%3A+Transactions+Server-Side+Defense])
>  adds verification with the transaction coordinator on Produce and 
> TxnOffsetCommit paths as a defense against hanging transactions. For 
> compatibility with older clients, retriable errors from the verification step 
> are translated to ones already expected and handled by existing clients. When 
> verification was added, we forgot to translate {{NETWORK_EXCEPTION}}s.
> [~dajac] noticed this manifesting as a test failure when 
> tests/kafkatest/tests/core/transactions_test.py was run with an older client 
> (prior to the fix for KAFKA-16122):
> {quote}
> {{NETWORK_EXCEPTION}} is indeed returned as a partition error. The 
> {{TransactionManager.TxnOffsetCommitHandler}} considers it as a fatal error 
> so it transitions to the fatal state.
> It seems that there are two cases where the server could return it: (1) When 
> the verification request times out or its connection is cut; or (2) in 
> {{AddPartitionsToTxnManager.addTxnData}} where we say that we use it because 
> we want a retriable error.
> {quote}
> The first case was triggered as part of the test. The second case happens 
> when there is already a verification request ({{AddPartitionsToTxn}}) in 
> flight with the same epoch and we want clients to try again when we're not 
> busy.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16623) KafkaAsyncConsumer system tests warn about revoking partitions that weren't previously assigned

2024-04-25 Thread Kirk True (Jira)
Kirk True created KAFKA-16623:
-

 Summary: KafkaAsyncConsumer system tests warn about revoking 
partitions that weren't previously assigned
 Key: KAFKA-16623
 URL: https://issues.apache.org/jira/browse/KAFKA-16623
 Project: Kafka
  Issue Type: Bug
  Components: clients, consumer, system tests
Affects Versions: 3.8.0
Reporter: Kirk True
Assignee: Kirk True
 Fix For: 3.8.0


When running system tests for the KafkaAsyncConsumer, we occasionally see this 
warning:

{noformat}
  File "/usr/lib/python3.7/threading.py", line 917, in _bootstrap_inner
self.run()
  File "/usr/lib/python3.7/threading.py", line 865, in run
self._target(*self._args, **self._kwargs)
  File 
"/home/jenkins/workspace/system-test-kafka-branch-builder/kafka/venv/lib/python3.7/site-packages/ducktape/services/background_thread.py",
 line 38, in _protected_worker
self._worker(idx, node)
  File 
"/home/jenkins/workspace/system-test-kafka-branch-builder/kafka/tests/kafkatest/services/verifiable_consumer.py",
 line 304, in _worker
handler.handle_partitions_revoked(event, node, self.logger)
  File 
"/home/jenkins/workspace/system-test-kafka-branch-builder/kafka/tests/kafkatest/services/verifiable_consumer.py",
 line 163, in handle_partitions_revoked
(tp, node.account.hostname)
AssertionError: Topic partition TopicPartition(topic='test_topic', partition=0) 
cannot be revoked from worker20 as it was not previously assigned to that 
consumer
{noformat}

It is unclear what is causing this.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16622) Mirromaker2 first Checkpoint not emitted until consumer group fully catches up once

2024-04-25 Thread Edoardo Comar (Jira)
Edoardo Comar created KAFKA-16622:
-

 Summary: Mirromaker2 first Checkpoint not emitted until consumer 
group fully catches up once
 Key: KAFKA-16622
 URL: https://issues.apache.org/jira/browse/KAFKA-16622
 Project: Kafka
  Issue Type: Bug
  Components: mirrormaker
Affects Versions: 3.6.2, 3.7.0, 3.8.0
Reporter: Edoardo Comar
 Attachments: edo-connect-mirror-maker-sourcetarget.properties

We observed an excessively delayed emission of the MM2 Checkpoint record.
It only gets created when the source consumer reaches the end of a topic. This 
does not seem reasonable.

In a very simple setup:

Tested with a standalone single-process MirrorMaker2 mirroring between two 
single-node Kafka clusters (mirrormaker config attached) with quick refresh 
intervals (e.g. 5 sec) and a small offset.lag.max (e.g. 10):

create a single topic in the source cluster
produce data to it (e.g. 1 records)
start a slow consumer - e.g. fetching 50 records/poll and pausing 1 sec between 
polls, which commits after each poll

watch the Checkpoint topic in the target cluster

bin/kafka-console-consumer.sh --bootstrap-server localhost:9192 \
  --topic source.checkpoints.internal \
  --formatter org.apache.kafka.connect.mirror.formatters.CheckpointFormatter \
  --from-beginning

-> no record appears in the checkpoint topic until the consumer reaches the end 
of the topic (i.e. its consumer group lag gets down to 0).
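
For anyone reproducing this, a sketch of the MirrorMaker2 properties behind the quick refresh/emit intervals and small offset.lag.max described above (the property names are standard MM2 connector configs; the values and bootstrap addresses are illustrative, not taken from the attached file):
{code}
# illustrative connect-mirror-maker properties; values are examples only
clusters = source, target
source.bootstrap.servers = localhost:9092
target.bootstrap.servers = localhost:9192
source->target.enabled = true
source->target.topics = .*

# quick refresh / emit intervals (seconds)
refresh.topics.interval.seconds = 5
refresh.groups.interval.seconds = 5
emit.checkpoints.interval.seconds = 5
sync.group.offsets.enabled = true
sync.group.offsets.interval.seconds = 5

# translate offsets even with a small lag
offset.lag.max = 10
{code}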







--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-16568) Add JMH Benchmarks for assignor performance testing

2024-04-25 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot resolved KAFKA-16568.
-
Fix Version/s: 3.8.0
   Resolution: Fixed

> Add JMH Benchmarks for assignor performance testing 
> 
>
> Key: KAFKA-16568
> URL: https://issues.apache.org/jira/browse/KAFKA-16568
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Ritika Reddy
>Assignee: Ritika Reddy
>Priority: Major
> Fix For: 3.8.0
>
>
> The 3 benchmarks below are used to test the performance and efficiency 
> of the consumer group rebalance process:
>  * Client Assignors (assign method)
>  * Server Assignors (assign method)
>  * Target Assignment Builder (build method)
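
For readers unfamiliar with the setup, a generic JMH skeleton showing the shape such an assignor benchmark takes (this is a placeholder sketch with a dummy Assignor interface, not the benchmarks added by this ticket):
{code:java}
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Fork;
import org.openjdk.jmh.annotations.Level;
import org.openjdk.jmh.annotations.Measurement;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Param;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.annotations.Warmup;

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.TimeUnit;

@State(Scope.Benchmark)
@Fork(1)
@Warmup(iterations = 3)
@Measurement(iterations = 5)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.MILLISECONDS)
public class AssignorBenchmarkSketch {

    /** Hypothetical stand-in for an assignor; not Kafka's actual interfaces. */
    interface Assignor {
        Map<String, int[]> assign(Map<String, Integer> memberToSubscribedPartitionCount);
    }

    @Param({"100", "1000", "10000"})
    int memberCount;

    Assignor assignor;
    Map<String, Integer> subscriptions;

    @Setup(Level.Trial)
    public void setup() {
        subscriptions = new HashMap<>();
        for (int i = 0; i < memberCount; i++) {
            subscriptions.put("member-" + i, 8); // 8 partitions per member, arbitrary
        }
        // naive stand-in so the benchmark method has work to measure
        assignor = members -> {
            Map<String, int[]> out = new HashMap<>();
            members.forEach((m, n) -> out.put(m, new int[n]));
            return out;
        };
    }

    @Benchmark
    public Map<String, int[]> assign() {
        return assignor.assign(subscriptions);
    }
}
{code}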



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16621) Alter MirrorSourceConnector offsets dont work

2024-04-25 Thread yuzhou (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

yuzhou updated KAFKA-16621:
---
Description: 
In connect-offsets topic:

the offsets wrote by connector, key is 
`\{"cluster":"A","partition":2,"topic":"topic"}`

after alter offsets, the key is  
`\{"partition":2,"topic":"topic","cluster":"A"}`

!image-2024-04-25-21-28-37-375.png!

in Worker.globalOffsetBackingStore.data, both two keys exist, because the are 
different strings:

{"cluster":"A","partition":2,"topic":"topic"} {"offset":2}

 

{"partition":2,"topic":"topic","cluster":"A"}{"offset":3}

So alter offsets is not succussful, because when get offsets from 
globalOffsetBackingStore, always returns 

{"offset":2}

 

 

  was:
In connect-offsets topic:

the offsets wrote by connector, key is 
`\{"cluster":"A","partition":2,"topic":"topic"}`

after alter offsets, the key is  
`\{"partition":2,"topic":"topic","cluster":"A"}`

!image-2024-04-25-21-28-37-375.png!

in Worker.globalOffsetBackingStore.data, both two keys exist, because the are 
different strings:

{"cluster":"A","partition":2,"topic":"topic"}{"offset":2}

 

{"partition":2,"topic":"topic","cluster":"A"}{"offset":3}

So alter offsets is not succussful, because when get offsets from 
globalOffsetBackingStore, always returns 

{"offset":2}

 

 


> Alter MirrorSourceConnector offsets dont work
> -
>
> Key: KAFKA-16621
> URL: https://issues.apache.org/jira/browse/KAFKA-16621
> Project: Kafka
>  Issue Type: Bug
>  Components: connect
>Reporter: yuzhou
>Priority: Major
> Attachments: image-2024-04-25-21-28-37-375.png
>
>
> In connect-offsets topic:
> the offsets written by the connector, key is 
> `\{"cluster":"A","partition":2,"topic":"topic"}`
> after altering offsets, the key is 
> `\{"partition":2,"topic":"topic","cluster":"A"}`
> !image-2024-04-25-21-28-37-375.png!
> in Worker.globalOffsetBackingStore.data, both keys exist, because they are 
> different strings:
> {"cluster":"A","partition":2,"topic":"topic"} {"offset":2}
>  
> {"partition":2,"topic":"topic","cluster":"A"}{"offset":3}
> So altering offsets is not successful, because getting offsets from 
> globalOffsetBackingStore always returns 
> {"offset":2}
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16621) Alter MirrorSourceConnector offsets dont work

2024-04-25 Thread yuzhou (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

yuzhou updated KAFKA-16621:
---
Description: 
In connect-offsets topic:

the offsets wrote by connector, key is 
`\{"cluster":"A","partition":2,"topic":"topic"}`

after alter offsets, the key is  
`\{"partition":2,"topic":"topic","cluster":"A"}`

!image-2024-04-25-21-28-37-375.png!

in Worker.globalOffsetBackingStore.data, both two keys exist, because the are 
different strings:

{"cluster":"A","partition":2,"topic":"topic"} \{"offset":2}

 

{"partition":2,"topic":"topic","cluster":"A"} \{"offset":3}

So alter offsets is not succussful, because when get offsets from 
globalOffsetBackingStore, always returns 

{"offset":2}

 

 

  was:
In connect-offsets topic:

the offsets wrote by connector, key is 
`\{"cluster":"A","partition":2,"topic":"topic"}`

after alter offsets, the key is  
`\{"partition":2,"topic":"topic","cluster":"A"}`

!image-2024-04-25-21-28-37-375.png!

in Worker.globalOffsetBackingStore.data, both two keys exist, because the are 
different strings:

{"cluster":"A","partition":2,"topic":"topic"} {"offset":2}


\{"partition":2,"topic":"topic","cluster":"A"} {"offset":3}

So alter offsets is not succussful, because when get offsets from 
globalOffsetBackingStore, always returns  {"offset":2}
 

 


> Alter MirrorSourceConnector offsets dont work
> -
>
> Key: KAFKA-16621
> URL: https://issues.apache.org/jira/browse/KAFKA-16621
> Project: Kafka
>  Issue Type: Bug
>  Components: connect
>Reporter: yuzhou
>Priority: Major
> Attachments: image-2024-04-25-21-28-37-375.png
>
>
> In connect-offsets topic:
> the offsets written by the connector, key is 
> `\{"cluster":"A","partition":2,"topic":"topic"}`
> after altering offsets, the key is 
> `\{"partition":2,"topic":"topic","cluster":"A"}`
> !image-2024-04-25-21-28-37-375.png!
> in Worker.globalOffsetBackingStore.data, both keys exist, because they are 
> different strings:
> {"cluster":"A","partition":2,"topic":"topic"} \{"offset":2}
>  
> {"partition":2,"topic":"topic","cluster":"A"} \{"offset":3}
> So altering offsets is not successful, because getting offsets from 
> globalOffsetBackingStore always returns 
> {"offset":2}
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16621) Alter MirrorSourceConnector offsets dont work

2024-04-25 Thread yuzhou (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

yuzhou updated KAFKA-16621:
---
Description: 
In connect-offsets topic:

the offsets wrote by connector, key is 
`\{"cluster":"A","partition":2,"topic":"topic"}`

after alter offsets, the key is  
`\{"partition":2,"topic":"topic","cluster":"A"}`

!image-2024-04-25-21-28-37-375.png!

in Worker.globalOffsetBackingStore.data, both two keys exist, because the are 
different strings:

{"cluster":"A","partition":2,"topic":"topic"}{"offset":2}

 

{"partition":2,"topic":"topic","cluster":"A"}{"offset":3}

So alter offsets is not succussful, because when get offsets from 
globalOffsetBackingStore, always returns 

{"offset":2}

 

 

  was:
In connect-offsets topic:

the offsets wrote by connector, key is 
`\{"cluster":"A","partition":2,"topic":"topic"}`

after alter offsets, the key is  
`\{"partition":2,"topic":"topic","cluster":"A"}`

!image-2024-04-25-21-28-37-375.png!

in Worker.globalOffsetBackingStore.data, both two keys exist, because the are 
different strings:

{"cluster":"A","partition":2,"topic":"topic"} \{"offset":2}

 

{"partition":2,"topic":"topic","cluster":"A"} \{"offset":3}

So alter offsets is not succussful, because when get offsets from 
globalOffsetBackingStore, always returns 

{"offset":2}

 

 


> Alter MirrorSourceConnector offsets dont work
> -
>
> Key: KAFKA-16621
> URL: https://issues.apache.org/jira/browse/KAFKA-16621
> Project: Kafka
>  Issue Type: Bug
>  Components: connect
>Reporter: yuzhou
>Priority: Major
> Attachments: image-2024-04-25-21-28-37-375.png
>
>
> In connect-offsets topic:
> the offsets written by the connector, key is 
> `\{"cluster":"A","partition":2,"topic":"topic"}`
> after altering offsets, the key is 
> `\{"partition":2,"topic":"topic","cluster":"A"}`
> !image-2024-04-25-21-28-37-375.png!
> in Worker.globalOffsetBackingStore.data, both keys exist, because they are 
> different strings:
> {"cluster":"A","partition":2,"topic":"topic"}{"offset":2}
>  
> {"partition":2,"topic":"topic","cluster":"A"}{"offset":3}
> So altering offsets is not successful, because getting offsets from 
> globalOffsetBackingStore always returns 
> {"offset":2}
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16621) Alter MirrorSourceConnector offsets dont work

2024-04-25 Thread yuzhou (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

yuzhou updated KAFKA-16621:
---
Description: 
In connect-offsets topic:

the offsets wrote by connector, key is 
`\{"cluster":"A","partition":2,"topic":"topic"}`

after alter offsets, the key is  
`\{"partition":2,"topic":"topic","cluster":"A"}`

!image-2024-04-25-21-28-37-375.png!

in Worker.globalOffsetBackingStore.data, both two keys exist, because the are 
different strings:

{"cluster":"A","partition":2,"topic":"topic"} {"offset":2}


\{"partition":2,"topic":"topic","cluster":"A"} {"offset":3}

So alter offsets is not succussful, because when get offsets from 
globalOffsetBackingStore, always returns  {"offset":2}
 

 

  was:
In connect-offsets topic:

the offsets wrote by connector, key is 
`\{"cluster":"A","partition":2,"topic":"topic"}`

after alter offsets, the key is  
`\{"partition":2,"topic":"topic","cluster":"A"}`

!image-2024-04-25-21-28-37-375.png!

in Worker.globalOffsetBackingStore.data, both two keys exist, because the are 
different strings:

{"cluster":"A","partition":2,"topic":"topic"} {"offset":2}


{"partition":2,"topic":"topic","cluster":"A"} {"offset":3}

So alter offsets is not succussful, because when get offsets from 
globalOffsetBackingStore, always returns  {"offset":2}
 

 


> Alter MirrorSourceConnector offsets dont work
> -
>
> Key: KAFKA-16621
> URL: https://issues.apache.org/jira/browse/KAFKA-16621
> Project: Kafka
>  Issue Type: Bug
>  Components: connect
>Reporter: yuzhou
>Priority: Major
> Attachments: image-2024-04-25-21-28-37-375.png
>
>
> In connect-offsets topic:
> the offsets written by the connector, key is 
> `\{"cluster":"A","partition":2,"topic":"topic"}`
> after altering offsets, the key is 
> `\{"partition":2,"topic":"topic","cluster":"A"}`
> !image-2024-04-25-21-28-37-375.png!
> in Worker.globalOffsetBackingStore.data, both keys exist, because they are 
> different strings:
> {"cluster":"A","partition":2,"topic":"topic"} {"offset":2}
> \{"partition":2,"topic":"topic","cluster":"A"} {"offset":3}
> So altering offsets is not successful, because getting offsets from 
> globalOffsetBackingStore always returns {"offset":2}
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16621) Alter MirrorSourceConnector offsets dont work

2024-04-25 Thread yuzhou (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

yuzhou updated KAFKA-16621:
---
Description: 
In connect-offsets topic:

the offsets wrote by connector, key is 
`\{"cluster":"A","partition":2,"topic":"topic"}`

after alter offsets, the key is  
`\{"partition":2,"topic":"topic","cluster":"A"}`

!image-2024-04-25-21-28-37-375.png!

in Worker.globalOffsetBackingStore.data, both two keys exist, because the are 
different strings:

{"cluster":"A","partition":2,"topic":"topic"} {"offset":2}


{"partition":2,"topic":"topic","cluster":"A"} {"offset":3}

So alter offsets is not succussful, because when get offsets from 
globalOffsetBackingStore, always returns  {"offset":2}
 

 

  was:
In connect-offsets topic:

the offsets wrote by connector, key is 
`\{"cluster":"A","partition":2,"topic":"topic"}`

after alter offsets, the key is  
`\{"partition":2,"topic":"topic","cluster":"A"}`

!image-2024-04-25-21-28-37-375.png!

in Worker.globalOffsetBackingStore.data, both two keys exist, because the are 
different strings:

{"cluster":"A","partition":2,"topic":"topic"} {"offset":2}

{"partition":2,"topic":"topic","cluster":"A"} {"offset":3}

So alter offsets is not succussful, because when get offsets from 
globalOffsetBackingStore, always returns  {"offset":2}
 

 


> Alter MirrorSourceConnector offsets dont work
> -
>
> Key: KAFKA-16621
> URL: https://issues.apache.org/jira/browse/KAFKA-16621
> Project: Kafka
>  Issue Type: Bug
>  Components: connect
>Reporter: yuzhou
>Priority: Major
> Attachments: image-2024-04-25-21-28-37-375.png
>
>
> In connect-offsets topic:
> the offsets written by the connector, key is 
> `\{"cluster":"A","partition":2,"topic":"topic"}`
> after altering offsets, the key is 
> `\{"partition":2,"topic":"topic","cluster":"A"}`
> !image-2024-04-25-21-28-37-375.png!
> in Worker.globalOffsetBackingStore.data, both keys exist, because they are 
> different strings:
> {"cluster":"A","partition":2,"topic":"topic"} {"offset":2}
> {"partition":2,"topic":"topic","cluster":"A"} {"offset":3}
> So altering offsets is not successful, because getting offsets from 
> globalOffsetBackingStore always returns {"offset":2}
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16621) Alter MirrorSourceConnector offsets dont work

2024-04-25 Thread yuzhou (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

yuzhou updated KAFKA-16621:
---
Description: 
In connect-offsets topic:

the offsets wrote by connector, key is 
`\{"cluster":"A","partition":2,"topic":"topic"}`

after alter offsets, the key is  
`\{"partition":2,"topic":"topic","cluster":"A"}`

!image-2024-04-25-21-28-37-375.png!

in Worker.globalOffsetBackingStore.data, both two keys exist, because the are 
different strings:

{"cluster":"A","partition":2,"topic":"topic"} {"offset":2}

{"partition":2,"topic":"topic","cluster":"A"} {"offset":3}

So alter offsets is not succussful, because when get offsets from 
globalOffsetBackingStore, always returns  {"offset":2}
 

 

  was:
In connect-offsets topic:

the offsets wrote by connector, key is 
`\{"cluster":"A","partition":2,"topic":"topic"}`

after alter offsets, the key is  
`\{"partition":2,"topic":"topic","cluster":"A"}`

!image-2024-04-25-21-28-37-375.png!

in Worker.globalOffsetBackingStore.data, both two keys exist, because the are 
different strings:

{"cluster":"A","partition":2,"topic":"topic"} \{"offset":2}

{"partition":2,"topic":"topic","cluster":"A"} \{"offset":3}

So alter offsets is not succussful, because when get offsets from 
globalOffsetBackingStore, always returns  \{"offset":2}
 

 


> Alter MirrorSourceConnector offsets dont work
> -
>
> Key: KAFKA-16621
> URL: https://issues.apache.org/jira/browse/KAFKA-16621
> Project: Kafka
>  Issue Type: Bug
>  Components: connect
>Reporter: yuzhou
>Priority: Major
> Attachments: image-2024-04-25-21-28-37-375.png
>
>
> In connect-offsets topic:
> the offsets written by the connector, key is 
> `\{"cluster":"A","partition":2,"topic":"topic"}`
> after altering offsets, the key is 
> `\{"partition":2,"topic":"topic","cluster":"A"}`
> !image-2024-04-25-21-28-37-375.png!
> in Worker.globalOffsetBackingStore.data, both keys exist, because they are 
> different strings:
> {"cluster":"A","partition":2,"topic":"topic"} {"offset":2}
> {"partition":2,"topic":"topic","cluster":"A"} {"offset":3}
> So altering offsets is not successful, because getting offsets from 
> globalOffsetBackingStore always returns {"offset":2}
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16621) Alter MirrorSourceConnector offsets dont work

2024-04-25 Thread yuzhou (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

yuzhou updated KAFKA-16621:
---
Description: 
In connect-offsets topic:

the offsets wrote by connector, key is 
`\{"cluster":"A","partition":2,"topic":"topic"}`

after alter offsets, the key is  
`\{"partition":2,"topic":"topic","cluster":"A"}`

!image-2024-04-25-21-28-37-375.png!

in Worker.globalOffsetBackingStore.data, both two keys exist, because the are 
different strings:

{"cluster":"A","partition":2,"topic":"topic"}  -- \{"offset":2}

{"partition":2,"topic":"topic","cluster":"A"} -- \{"offset":3}

So alter offsets is not succussful, because when get offsets from 
globalOffsetBackingStore, always returns  \{"offset":2}
 

 

  was:
In connect-offsets topic:

the offsets wrote by connector, key is 
`\{"cluster":"A","partition":2,"topic":"topic"}`

after alter offsets, the key is  
`\{"partition":2,"topic":"topic","cluster":"A"}`

!image-2024-04-25-21-28-37-375.png!

in Worker.globalOffsetBackingStore.data, both two keys exist, because the are 
different strings:

{"cluster":"A","partition":2,"topic":"topic"}> \{"offset":2}

{"partition":2,"topic":"topic","cluster":"A"}> \{"offset":3}

So alter offsets is not succussful, because when get offsets from 
globalOffsetBackingStore, always returns  \{"offset":2}
 

 


> Alter MirrorSourceConnector offsets dont work
> -
>
> Key: KAFKA-16621
> URL: https://issues.apache.org/jira/browse/KAFKA-16621
> Project: Kafka
>  Issue Type: Bug
>  Components: connect
>Reporter: yuzhou
>Priority: Major
> Attachments: image-2024-04-25-21-28-37-375.png
>
>
> In connect-offsets topic:
> the offsets written by the connector, key is 
> `\{"cluster":"A","partition":2,"topic":"topic"}`
> after altering offsets, the key is 
> `\{"partition":2,"topic":"topic","cluster":"A"}`
> !image-2024-04-25-21-28-37-375.png!
> in Worker.globalOffsetBackingStore.data, both keys exist, because they are 
> different strings:
> {"cluster":"A","partition":2,"topic":"topic"}  -- \{"offset":2}
> {"partition":2,"topic":"topic","cluster":"A"} -- \{"offset":3}
> So altering offsets is not successful, because getting offsets from 
> globalOffsetBackingStore always returns \{"offset":2}
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16621) Alter MirrorSourceConnector offsets dont work

2024-04-25 Thread yuzhou (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

yuzhou updated KAFKA-16621:
---
Description: 
In connect-offsets topic:

the offsets wrote by connector, key is 
`\{"cluster":"A","partition":2,"topic":"topic"}`

after alter offsets, the key is  
`\{"partition":2,"topic":"topic","cluster":"A"}`

!image-2024-04-25-21-28-37-375.png!

in Worker.globalOffsetBackingStore.data, both two keys exist, because the are 
different strings:

{"cluster":"A","partition":2,"topic":"topic"}\{"offset":2}

{"partition":2,"topic":"topic","cluster":"A"} {"offset":3}

So alter offsets is not succussful, because when get offsets from 
globalOffsetBackingStore, always returns  \{"offset":2}
 

 

  was:
In connect-offsets topic:

the offsets wrote by connector, key is 
`\{"cluster":"A","partition":2,"topic":"topic"}`

after alter offsets, the key is  
`\{"partition":2,"topic":"topic","cluster":"A"}`

!image-2024-04-25-21-28-37-375.png!

in Worker.globalOffsetBackingStore.data, both two keys exist, because the are 
different strings:

{"cluster":"A","partition":2,"topic":"topic"}               {"offset":2}

{"partition":2,"topic":"topic","cluster":"A"}               \{"offset":3}

So alter offsets is not succussful, because when get offsets from 
globalOffsetBackingStore, always returns  \{"offset":2}
 

 


> Alter MirrorSourceConnector offsets dont work
> -
>
> Key: KAFKA-16621
> URL: https://issues.apache.org/jira/browse/KAFKA-16621
> Project: Kafka
>  Issue Type: Bug
>  Components: connect
>Reporter: yuzhou
>Priority: Major
> Attachments: image-2024-04-25-21-28-37-375.png
>
>
> In connect-offsets topic:
> the offsets written by the connector, key is 
> `\{"cluster":"A","partition":2,"topic":"topic"}`
> after altering offsets, the key is 
> `\{"partition":2,"topic":"topic","cluster":"A"}`
> !image-2024-04-25-21-28-37-375.png!
> in Worker.globalOffsetBackingStore.data, both keys exist, because they are 
> different strings:
> {"cluster":"A","partition":2,"topic":"topic"}\{"offset":2}
> {"partition":2,"topic":"topic","cluster":"A"} {"offset":3}
> So altering offsets is not successful, because getting offsets from 
> globalOffsetBackingStore always returns \{"offset":2}
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16621) Alter MirrorSourceConnector offsets dont work

2024-04-25 Thread yuzhou (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

yuzhou updated KAFKA-16621:
---
Description: 
In connect-offsets topic:

the offsets wrote by connector, key is 
`\{"cluster":"A","partition":2,"topic":"topic"}`

after alter offsets, the key is  
`\{"partition":2,"topic":"topic","cluster":"A"}`

!image-2024-04-25-21-28-37-375.png!

in Worker.globalOffsetBackingStore.data, both two keys exist, because the are 
different strings:

{"cluster":"A","partition":2,"topic":"topic"} \{"offset":2}

{"partition":2,"topic":"topic","cluster":"A"} \{"offset":3}

So alter offsets is not succussful, because when get offsets from 
globalOffsetBackingStore, always returns  \{"offset":2}
 

 

  was:
In connect-offsets topic:

the offsets wrote by connector, key is 
`\{"cluster":"A","partition":2,"topic":"topic"}`

after alter offsets, the key is  
`\{"partition":2,"topic":"topic","cluster":"A"}`

!image-2024-04-25-21-28-37-375.png!

in Worker.globalOffsetBackingStore.data, both two keys exist, because the are 
different strings:

{"cluster":"A","partition":2,"topic":"topic"}\{"offset":2}

{"partition":2,"topic":"topic","cluster":"A"} {"offset":3}

So alter offsets is not succussful, because when get offsets from 
globalOffsetBackingStore, always returns  \{"offset":2}
 

 


> Alter MirrorSourceConnector offsets dont work
> -
>
> Key: KAFKA-16621
> URL: https://issues.apache.org/jira/browse/KAFKA-16621
> Project: Kafka
>  Issue Type: Bug
>  Components: connect
>Reporter: yuzhou
>Priority: Major
> Attachments: image-2024-04-25-21-28-37-375.png
>
>
> In connect-offsets topic:
> the offsets written by the connector, key is 
> `\{"cluster":"A","partition":2,"topic":"topic"}`
> after altering offsets, the key is 
> `\{"partition":2,"topic":"topic","cluster":"A"}`
> !image-2024-04-25-21-28-37-375.png!
> in Worker.globalOffsetBackingStore.data, both keys exist, because they are 
> different strings:
> {"cluster":"A","partition":2,"topic":"topic"} \{"offset":2}
> {"partition":2,"topic":"topic","cluster":"A"} \{"offset":3}
> So altering offsets is not successful, because getting offsets from 
> globalOffsetBackingStore always returns \{"offset":2}
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16621) Alter MirrorSourceConnector offsets dont work

2024-04-25 Thread yuzhou (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

yuzhou updated KAFKA-16621:
---
Description: 
In connect-offsets topic:

the offsets wrote by connector, key is 
`\{"cluster":"A","partition":2,"topic":"topic"}`

after alter offsets, the key is  
`\{"partition":2,"topic":"topic","cluster":"A"}`

!image-2024-04-25-21-28-37-375.png!

in Worker.globalOffsetBackingStore.data, both two keys exist, because the are 
different strings:

{"cluster":"A","partition":2,"topic":"topic"}               {"offset":2}

{"partition":2,"topic":"topic","cluster":"A"}               \{"offset":3}

So alter offsets is not succussful, because when get offsets from 
globalOffsetBackingStore, always returns  \{"offset":2}
 

 

  was:
In connect-offsets topic:

the offsets wrote by connector, key is 
`\{"cluster":"A","partition":2,"topic":"topic"}`

after alter offsets, the key is  
`\{"partition":2,"topic":"topic","cluster":"A"}`

!image-2024-04-25-21-28-37-375.png!

in Worker.globalOffsetBackingStore.data, both two keys exist, because the are 
different strings:

{"cluster":"A","partition":2,"topic":"topic"}  -- \{"offset":2}

{"partition":2,"topic":"topic","cluster":"A"} -- \{"offset":3}

So alter offsets is not succussful, because when get offsets from 
globalOffsetBackingStore, always returns  \{"offset":2}
 

 


> Alter MirrorSourceConnector offsets dont work
> -
>
> Key: KAFKA-16621
> URL: https://issues.apache.org/jira/browse/KAFKA-16621
> Project: Kafka
>  Issue Type: Bug
>  Components: connect
>Reporter: yuzhou
>Priority: Major
> Attachments: image-2024-04-25-21-28-37-375.png
>
>
> In connect-offsets topic:
> the offsets written by the connector, key is 
> `\{"cluster":"A","partition":2,"topic":"topic"}`
> after altering offsets, the key is 
> `\{"partition":2,"topic":"topic","cluster":"A"}`
> !image-2024-04-25-21-28-37-375.png!
> in Worker.globalOffsetBackingStore.data, both keys exist, because they are 
> different strings:
> {"cluster":"A","partition":2,"topic":"topic"}               {"offset":2}
> {"partition":2,"topic":"topic","cluster":"A"}               \{"offset":3}
> So altering offsets is not successful, because getting offsets from 
> globalOffsetBackingStore always returns \{"offset":2}
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16621) Alter MirrorSourceConnector offsets dont work

2024-04-25 Thread yuzhou (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

yuzhou updated KAFKA-16621:
---
Description: 
In connect-offsets topic:

the offsets wrote by connector, key is 
`\{"cluster":"A","partition":2,"topic":"topic"}`

after alter offsets, the key is  
`\{"partition":2,"topic":"topic","cluster":"A"}`

!image-2024-04-25-21-28-37-375.png!

in Worker.globalOffsetBackingStore.data, both two keys exist, because the are 
different strings:

{"cluster":"A","partition":2,"topic":"topic"}  > \{"offset":2}

{"partition":2,"topic":"topic","cluster":"A"}  > \{"offset":3}

So alter offsets is not succussful, because when get offsets from 
globalOffsetBackingStore, always returns  \{"offset":2}
 

 

  was:
In connect-offsets topic:

the offsets wrote by connector, key is 
`\{"cluster":"A","partition":2,"topic":"topic"}`

after alter offsets, the key is  
`\{"partition":2,"topic":"topic","cluster":"A"}`

!image-2024-04-25-21-28-37-375.png!

in Worker.globalOffsetBackingStore.data, both two keys exist, because the are 
different strings:

{"cluster":"A","partition":2,"topic":"topic"}  -> \{"offset":2}

{"partition":2,"topic":"topic","cluster":"A"}  -> \{"offset":3}

So alter offsets is not succussful, because when get offsets from 
globalOffsetBackingStore, always returns  \{"offset":2}
 

 


> Alter MirrorSourceConnector offsets dont work
> -
>
> Key: KAFKA-16621
> URL: https://issues.apache.org/jira/browse/KAFKA-16621
> Project: Kafka
>  Issue Type: Bug
>  Components: connect
>Reporter: yuzhou
>Priority: Major
> Attachments: image-2024-04-25-21-28-37-375.png
>
>
> In connect-offsets topic:
> the offsets wrote by connector, key is 
> `\{"cluster":"A","partition":2,"topic":"topic"}`
> after alter offsets, the key is  
> `\{"partition":2,"topic":"topic","cluster":"A"}`
> !image-2024-04-25-21-28-37-375.png!
> in Worker.globalOffsetBackingStore.data, both two keys exist, because the are 
> different strings:
> {"cluster":"A","partition":2,"topic":"topic"}  > \{"offset":2}
> {"partition":2,"topic":"topic","cluster":"A"}  > \{"offset":3}
> So alter offsets is not succussful, because when get offsets from 
> globalOffsetBackingStore, always returns  \{"offset":2}
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

