[jira] [Commented] (KAFKA-16908) Refactor QuorumConfig with AbstractConfig

2024-06-06 Thread Johnny Hsu (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17852848#comment-17852848
 ] 

Johnny Hsu commented on KAFKA-16908:


hi [~chia7712], may I know if you are working on this? If not, I would like to 
work on it, thanks! 

> Refactor QuorumConfig with AbstractConfig
> -
>
> Key: KAFKA-16908
> URL: https://issues.apache.org/jira/browse/KAFKA-16908
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Minor
>
> This is similar to KAFKA-16884



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-16672) Fix flaky DedicatedMirrorIntegrationTest.testMultiNodeCluster

2024-05-09 Thread Johnny Hsu (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17844986#comment-17844986
 ] 

Johnny Hsu commented on KAFKA-16672:


From the comment in DistributedHerder:

> Similar to handling HTTP requests, config changes which are observed 
> asynchronously by polling the config log are batched for handling in the work 
> thread.

 

Thus, when there is a rebalance, the herder throws a 
`RebalanceNeededException` (in DistributedHerder, L2309). I think this should 
be a retryable exception, since the condition is transient: we should wait a 
while and try again.
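The retry treatment described above can be sketched as follows. This is a minimal illustration with hypothetical names: the local `RebalanceNeededException` class stands in for Connect's real exception, and `awaitTaskConfigurations` for the test helper mentioned in the ticket.

```java
// Sketch: treat a transient rebalance error as retryable while waiting for
// task configs. RebalanceNeededException here is a local stand-in for the
// real Connect exception; this is not the actual DistributedHerder API.
public class RetryOnRebalance {
    public static class RebalanceNeededException extends RuntimeException {}

    public interface ConfigCheck { void check(); }

    // Retry the check until it stops throwing the transient exception
    // or the attempt budget is exhausted.
    public static boolean awaitTaskConfigurations(ConfigCheck check, int maxAttempts) {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                check.check();
                return true;  // configs observed; done
            } catch (RebalanceNeededException e) {
                // Transient: a rebalance is in progress, so wait and retry.
            }
        }
        return false;  // still rebalancing after exhausting the budget
    }

    public static void main(String[] args) {
        int[] calls = {0};
        // The check fails twice with the transient exception, then succeeds.
        boolean ok = awaitTaskConfigurations(() -> {
            if (++calls[0] < 3) throw new RebalanceNeededException();
        }, 5);
        System.out.println(ok + " after " + calls[0] + " calls");
    }
}
```

The key point is that the transient exception is swallowed and the check retried, instead of being allowed to fail the test on the first rebalance.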

[~chia7712] I am happy to fix this.

> Fix flaky DedicatedMirrorIntegrationTest.testMultiNodeCluster
> -
>
> Key: KAFKA-16672
> URL: https://issues.apache.org/jira/browse/KAFKA-16672
> Project: Kafka
>  Issue Type: Test
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Minor
>
> It is flaky on my Jenkins, and sometimes it fails in Kafka CI [0].
> The error happens because of a race condition. `KafkaBasedLog` loads records 
> from the topic on a background thread, so `RebalanceNeededException` will be 
> thrown if we check the task configs too soon. It seems to me 
> `RebalanceNeededException` is a temporary exception, so we should treat it as 
> a retryable exception while waiting.
> In short, we should catch `RebalanceNeededException` in 
> `awaitTaskConfigurations` [1] 
> [0] 
> https://ge.apache.org/scans/tests?search.buildOutcome=failure=gradle=P28D=kafka=Asia%2FTaipei=org.apache.kafka.connect.mirror.integration.DedicatedMirrorIntegrationTest=testMultiNodeCluster()
> [1] 
> https://github.com/apache/kafka/blob/55a00be4e973f3f4c8869b6f70de1e285719e890/connect/mirror/src/test/java/org/apache/kafka/connect/mirror/integration/DedicatedMirrorIntegrationTest.java#L355





[jira] [Commented] (KAFKA-16684) FetchResponse#responseData could return incorrect data

2024-05-07 Thread Johnny Hsu (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17844496#comment-17844496
 ] 

Johnny Hsu commented on KAFKA-16684:


hi [~chia7712] 
May I know if you are working on this? If not, I am happy to help :) 

> FetchResponse#responseData could return incorrect data
> --
>
> Key: KAFKA-16684
> URL: https://issues.apache.org/jira/browse/KAFKA-16684
> Project: Kafka
>  Issue Type: Bug
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Minor
>
> [https://github.com/apache/kafka/commit/2b8aff58b575c199ee8372e5689420c9d77357a5]
>  made it accept input to return "partial" data. The content of the output is 
> based on the input, but we cache the output ... it will return the same 
> output even though we pass different input. That is a potential bug.
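The caching problem described above follows a common pattern; here is a minimal, self-contained sketch of that pattern (illustrative names and types, not `FetchResponse`'s actual fields or API):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the bug pattern: the output is derived from the input, but the
// result is cached after the first call, so later calls silently ignore
// their (possibly different) input.
public class CachedResponse {
    private Map<String, Integer> cached;  // lazily computed, then reused

    public Map<String, Integer> responseData(Map<String, Integer> requested) {
        if (cached == null) {
            cached = new HashMap<>(requested);  // derived from the FIRST input only
        }
        return cached;  // BUG: same output even for a different `requested`
    }

    public static void main(String[] args) {
        CachedResponse r = new CachedResponse();
        Map<String, Integer> first = Map.of("topicA-0", 1);
        Map<String, Integer> second = Map.of("topicB-0", 2);
        // Both calls return data for the first request.
        System.out.println(r.responseData(first));   // {topicA-0=1}
        System.out.println(r.responseData(second));  // still {topicA-0=1}
    }
}
```

Either the cache must be keyed on the input, or the method must recompute when the input differs from what was cached.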





[jira] [Commented] (KAFKA-16668) Enable to set tags by `ClusterTest`

2024-05-04 Thread Johnny Hsu (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17843482#comment-17843482
 ] 

Johnny Hsu commented on KAFKA-16668:


hi [~chia7712] 
may I know if you are working on this? If not, I am willing to help, thanks!

> Enable to set tags by `ClusterTest` 
> 
>
> Key: KAFKA-16668
> URL: https://issues.apache.org/jira/browse/KAFKA-16668
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Minor
>
> Currently, the display name can be customized only via `name` 
> (https://github.com/apache/kafka/blob/trunk/core/src/test/java/kafka/test/annotation/ClusterTest.java#L42).
>  However, the "key" is hard-coded to "name=xxx". Also, it is impossible to 
> set more "tags" for the display name. 
> https://github.com/apache/kafka/pull/15766 is an example where we want to 
> add "xxx=bbb" to the display name.
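One way to sketch the generalization the ticket asks for (illustrative code, not the real `ClusterTest` annotation API): build the display name from an ordered tag map instead of a single hard-coded `name` key.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

// Sketch: render a test display name from arbitrary ordered tags, so
// callers can add "xxx=bbb" style entries rather than only "name=xxx".
public class DisplayName {
    public static String fromTags(Map<String, String> tags) {
        return tags.entrySet().stream()
                .map(e -> e.getKey() + "=" + e.getValue())
                .collect(Collectors.joining(", "));
    }

    public static void main(String[] args) {
        // LinkedHashMap preserves insertion order in the rendered name.
        Map<String, String> tags = new LinkedHashMap<>();
        tags.put("name", "testA");
        tags.put("quorum", "kraft");
        System.out.println(fromTags(tags));  // name=testA, quorum=kraft
    }
}
```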





[jira] [Comment Edited] (KAFKA-14579) Move DumpLogSegments to tools

2024-05-04 Thread Johnny Hsu (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-14579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17843421#comment-17843421
 ] 

Johnny Hsu edited comment on KAFKA-14579 at 5/4/24 10:36 AM:
-

Currently only DumpLogSegments uses Decoder; if it is removed, Decoder should 
be safe to deprecate, since no one will be using it anymore.
I am willing to work on the KIP for this


was (Author: JIRAUSER304478):
currently only DumpLogSegments is using Decoder, if it's removed then Decoder 
should be safe to be deprecated since no one will be using that anymore.
I am willing to work on the KIP for this :) 

> Move DumpLogSegments to tools
> -
>
> Key: KAFKA-14579
> URL: https://issues.apache.org/jira/browse/KAFKA-14579
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Mickael Maison
>Assignee: Alexandre Dupriez
>Priority: Major
>






[jira] [Comment Edited] (KAFKA-14579) Move DumpLogSegments to tools

2024-05-04 Thread Johnny Hsu (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-14579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17843421#comment-17843421
 ] 

Johnny Hsu edited comment on KAFKA-14579 at 5/4/24 10:36 AM:
-

Currently only DumpLogSegments uses Decoder; if it is removed, Decoder should 
be safe to deprecate, since no one will be using it anymore.
I am willing to work on the KIP for this :) 


was (Author: JIRAUSER304478):
currently only DumpLogSegments is using Decoder, if it's removed then Decoder 
should be safe to be deprecated since no one will be using that anymore 

> Move DumpLogSegments to tools
> -
>
> Key: KAFKA-14579
> URL: https://issues.apache.org/jira/browse/KAFKA-14579
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Mickael Maison
>Assignee: Alexandre Dupriez
>Priority: Major
>






[jira] [Commented] (KAFKA-14579) Move DumpLogSegments to tools

2024-05-04 Thread Johnny Hsu (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-14579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17843421#comment-17843421
 ] 

Johnny Hsu commented on KAFKA-14579:


Currently only DumpLogSegments uses Decoder; if it is removed, Decoder should 
be safe to deprecate, since no one will be using it anymore.

> Move DumpLogSegments to tools
> -
>
> Key: KAFKA-14579
> URL: https://issues.apache.org/jira/browse/KAFKA-14579
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Mickael Maison
>Assignee: Alexandre Dupriez
>Priority: Major
>






[jira] [Comment Edited] (KAFKA-16174) Flaky test: testDescribeQuorumStatusSuccessful – org.apache.kafka.tools.MetadataQuorumCommandTest

2024-05-04 Thread Johnny Hsu (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17843403#comment-17843403
 ] 

Johnny Hsu edited comment on KAFKA-16174 at 5/4/24 8:21 AM:


the exception is from 
[https://github.com/apache/kafka/blob/9b8aac22ec7ce927a2ceb2bfe7afd57419ee946c/core/src/main/scala/kafka/server/BrokerServer.scala#L474]

when the cluster starts, 
[https://github.com/apache/kafka/blob/9b8aac22ec7ce927a2ceb2bfe7afd57419ee946c/core/src/test/java/kafka/testkit/KafkaClusterTestKit.java#L426]
 tries to init the broker, but it fails to get the response from the controller.


was (Author: JIRAUSER304478):
the exception is from 
[https://github.com/apache/kafka/blob/9b8aac22ec7ce927a2ceb2bfe7afd57419ee946c/core/src/main/scala/kafka/server/BrokerServer.scala#L474]

when the cluster starts, 
[https://github.com/apache/kafka/blob/9b8aac22ec7ce927a2ceb2bfe7afd57419ee946c/core/src/test/java/kafka/testkit/KafkaClusterTestKit.java#L426]
 tries to init broker, but it failed. 

> Flaky test: testDescribeQuorumStatusSuccessful – 
> org.apache.kafka.tools.MetadataQuorumCommandTest
> -
>
> Key: KAFKA-16174
> URL: https://issues.apache.org/jira/browse/KAFKA-16174
> Project: Kafka
>  Issue Type: Test
>Reporter: Apoorv Mittal
>Assignee: Johnny Hsu
>Priority: Major
>  Labels: flaky-test
>
> [https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka-pr/detail/PR-15190/3/tests/]
>  
> {code:java}
> Error: java.util.concurrent.ExecutionException: java.lang.RuntimeException: 
> Received a fatal error while waiting for the controller to acknowledge that 
> we are caught up
> Stacktrace: java.util.concurrent.ExecutionException: 
> java.lang.RuntimeException: Received a fatal error while waiting for the 
> controller to acknowledge that we are caught up at 
> java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
> at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191)   at 
> kafka.testkit.KafkaClusterTestKit.startup(KafkaClusterTestKit.java:421)  
> at 
> kafka.test.junit.RaftClusterInvocationContext.lambda$getAdditionalExtensions$5(RaftClusterInvocationContext.java:116)
> at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeBeforeTestExecutionCallbacks$5(TestMethodTestDescriptor.java:192)
>   at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeBeforeMethodsOrCallbacksUntilExceptionOccurs$6(TestMethodTestDescriptor.java:203)
>   at 
> org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
> at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeBeforeMethodsOrCallbacksUntilExceptionOccurs(TestMethodTestDescriptor.java:203)
>at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeBeforeTestExecutionCallbacks(TestMethodTestDescriptor.java:191)
>at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:137)
>   at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:69)
>at 
> org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:151)
>at 
> org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
> at 
> org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141)
>at 
> org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
> at 
> org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139)
>at 
> org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
> at 
> org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138)
> at 
> org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95)
> at 
> org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:35)
>at 
> org.junit.platform.engine.support.hierarchical.NodeTestTask$DefaultDynamicTestExecutor.execute(NodeTestTask.java:226)
> at 
> org.junit.platform.engine.support.hierarchical.NodeTestTask$DefaultDynamicTestExecutor.execute(NodeTestTask.java:204)
>  {code}





[jira] [Commented] (KAFKA-16174) Flaky test: testDescribeQuorumStatusSuccessful – org.apache.kafka.tools.MetadataQuorumCommandTest

2024-05-04 Thread Johnny Hsu (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17843403#comment-17843403
 ] 

Johnny Hsu commented on KAFKA-16174:


the exception is from 
[https://github.com/apache/kafka/blob/9b8aac22ec7ce927a2ceb2bfe7afd57419ee946c/core/src/main/scala/kafka/server/BrokerServer.scala#L474]

when the cluster starts, 
[https://github.com/apache/kafka/blob/9b8aac22ec7ce927a2ceb2bfe7afd57419ee946c/core/src/test/java/kafka/testkit/KafkaClusterTestKit.java#L426]
 tries to init the broker, but it fails. 

> Flaky test: testDescribeQuorumStatusSuccessful – 
> org.apache.kafka.tools.MetadataQuorumCommandTest
> -
>
> Key: KAFKA-16174
> URL: https://issues.apache.org/jira/browse/KAFKA-16174
> Project: Kafka
>  Issue Type: Test
>Reporter: Apoorv Mittal
>Assignee: Johnny Hsu
>Priority: Major
>  Labels: flaky-test
>
> [https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka-pr/detail/PR-15190/3/tests/]
>  
> {code:java}
> Error: java.util.concurrent.ExecutionException: java.lang.RuntimeException: 
> Received a fatal error while waiting for the controller to acknowledge that 
> we are caught up
> Stacktrace: java.util.concurrent.ExecutionException: 
> java.lang.RuntimeException: Received a fatal error while waiting for the 
> controller to acknowledge that we are caught up at 
> java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
> at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191)   at 
> kafka.testkit.KafkaClusterTestKit.startup(KafkaClusterTestKit.java:421)  
> at 
> kafka.test.junit.RaftClusterInvocationContext.lambda$getAdditionalExtensions$5(RaftClusterInvocationContext.java:116)
> at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeBeforeTestExecutionCallbacks$5(TestMethodTestDescriptor.java:192)
>   at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeBeforeMethodsOrCallbacksUntilExceptionOccurs$6(TestMethodTestDescriptor.java:203)
>   at 
> org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
> at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeBeforeMethodsOrCallbacksUntilExceptionOccurs(TestMethodTestDescriptor.java:203)
>at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeBeforeTestExecutionCallbacks(TestMethodTestDescriptor.java:191)
>at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:137)
>   at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:69)
>at 
> org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:151)
>at 
> org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
> at 
> org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141)
>at 
> org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
> at 
> org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139)
>at 
> org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
> at 
> org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138)
> at 
> org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95)
> at 
> org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:35)
>at 
> org.junit.platform.engine.support.hierarchical.NodeTestTask$DefaultDynamicTestExecutor.execute(NodeTestTask.java:226)
> at 
> org.junit.platform.engine.support.hierarchical.NodeTestTask$DefaultDynamicTestExecutor.execute(NodeTestTask.java:204)
>  {code}





[jira] [Assigned] (KAFKA-16174) Flaky test: testDescribeQuorumStatusSuccessful – org.apache.kafka.tools.MetadataQuorumCommandTest

2024-05-03 Thread Johnny Hsu (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Johnny Hsu reassigned KAFKA-16174:
--

Assignee: Johnny Hsu

> Flaky test: testDescribeQuorumStatusSuccessful – 
> org.apache.kafka.tools.MetadataQuorumCommandTest
> -
>
> Key: KAFKA-16174
> URL: https://issues.apache.org/jira/browse/KAFKA-16174
> Project: Kafka
>  Issue Type: Test
>Reporter: Apoorv Mittal
>Assignee: Johnny Hsu
>Priority: Major
>  Labels: flaky-test
>
> [https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka-pr/detail/PR-15190/3/tests/]
>  
> {code:java}
> Error: java.util.concurrent.ExecutionException: java.lang.RuntimeException: 
> Received a fatal error while waiting for the controller to acknowledge that 
> we are caught up
> Stacktrace: java.util.concurrent.ExecutionException: 
> java.lang.RuntimeException: Received a fatal error while waiting for the 
> controller to acknowledge that we are caught up at 
> java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
> at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191)   at 
> kafka.testkit.KafkaClusterTestKit.startup(KafkaClusterTestKit.java:421)  
> at 
> kafka.test.junit.RaftClusterInvocationContext.lambda$getAdditionalExtensions$5(RaftClusterInvocationContext.java:116)
> at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeBeforeTestExecutionCallbacks$5(TestMethodTestDescriptor.java:192)
>   at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeBeforeMethodsOrCallbacksUntilExceptionOccurs$6(TestMethodTestDescriptor.java:203)
>   at 
> org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
> at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeBeforeMethodsOrCallbacksUntilExceptionOccurs(TestMethodTestDescriptor.java:203)
>at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeBeforeTestExecutionCallbacks(TestMethodTestDescriptor.java:191)
>at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:137)
>   at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:69)
>at 
> org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:151)
>at 
> org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
> at 
> org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141)
>at 
> org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
> at 
> org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139)
>at 
> org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
> at 
> org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138)
> at 
> org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95)
> at 
> org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:35)
>at 
> org.junit.platform.engine.support.hierarchical.NodeTestTask$DefaultDynamicTestExecutor.execute(NodeTestTask.java:226)
> at 
> org.junit.platform.engine.support.hierarchical.NodeTestTask$DefaultDynamicTestExecutor.execute(NodeTestTask.java:204)
>  {code}





[jira] [Commented] (KAFKA-16174) Flaky test: testDescribeQuorumStatusSuccessful – org.apache.kafka.tools.MetadataQuorumCommandTest

2024-05-03 Thread Johnny Hsu (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17843155#comment-17843155
 ] 

Johnny Hsu commented on KAFKA-16174:


[~apoorvmittal10] may I know if you are working on this ticket? If not, I am 
willing to help :) 

> Flaky test: testDescribeQuorumStatusSuccessful – 
> org.apache.kafka.tools.MetadataQuorumCommandTest
> -
>
> Key: KAFKA-16174
> URL: https://issues.apache.org/jira/browse/KAFKA-16174
> Project: Kafka
>  Issue Type: Test
>Reporter: Apoorv Mittal
>Priority: Major
>  Labels: flaky-test
>
> [https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka-pr/detail/PR-15190/3/tests/]
>  
> {code:java}
> Error: java.util.concurrent.ExecutionException: java.lang.RuntimeException: 
> Received a fatal error while waiting for the controller to acknowledge that 
> we are caught up
> Stacktrace: java.util.concurrent.ExecutionException: 
> java.lang.RuntimeException: Received a fatal error while waiting for the 
> controller to acknowledge that we are caught up at 
> java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
> at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191)   at 
> kafka.testkit.KafkaClusterTestKit.startup(KafkaClusterTestKit.java:421)  
> at 
> kafka.test.junit.RaftClusterInvocationContext.lambda$getAdditionalExtensions$5(RaftClusterInvocationContext.java:116)
> at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeBeforeTestExecutionCallbacks$5(TestMethodTestDescriptor.java:192)
>   at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeBeforeMethodsOrCallbacksUntilExceptionOccurs$6(TestMethodTestDescriptor.java:203)
>   at 
> org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
> at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeBeforeMethodsOrCallbacksUntilExceptionOccurs(TestMethodTestDescriptor.java:203)
>at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeBeforeTestExecutionCallbacks(TestMethodTestDescriptor.java:191)
>at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:137)
>   at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:69)
>at 
> org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:151)
>at 
> org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
> at 
> org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141)
>at 
> org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
> at 
> org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139)
>at 
> org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
> at 
> org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138)
> at 
> org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95)
> at 
> org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:35)
>at 
> org.junit.platform.engine.support.hierarchical.NodeTestTask$DefaultDynamicTestExecutor.execute(NodeTestTask.java:226)
> at 
> org.junit.platform.engine.support.hierarchical.NodeTestTask$DefaultDynamicTestExecutor.execute(NodeTestTask.java:204)
>  {code}





[jira] [Commented] (KAFKA-16027) Refactor MetadataTest#testUpdatePartitionLeadership

2024-05-02 Thread Johnny Hsu (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17842946#comment-17842946
 ] 

Johnny Hsu commented on KAFKA-16027:


[~alexanderaghili] thanks for replying!
Got it; I will close my draft and let's discuss on your PR :)

> Refactor MetadataTest#testUpdatePartitionLeadership
> ---
>
> Key: KAFKA-16027
> URL: https://issues.apache.org/jira/browse/KAFKA-16027
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Philip Nee
>Assignee: Alexander Aghili
>Priority: Minor
>  Labels: newbie
>
> MetadataTest#testUpdatePartitionLeadership is extremely long. I think it is 
> pretty close to the 160-line method limit - I tried to modify it, but it 
> would hit the limit when I tried to break things into separate lines.
> The test also contains two tests, so it is best to split it into two 
> separate tests.
> We should also move this to ConsumerMetadata.java





[jira] [Commented] (KAFKA-16027) Refactor MetadataTest#testUpdatePartitionLeadership

2024-04-30 Thread Johnny Hsu (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17842537#comment-17842537
 ] 

Johnny Hsu commented on KAFKA-16027:


hey [~alexanderaghili], may I know if there are any updates on this?
I am happy to help if you are busy with something else :) 

> Refactor MetadataTest#testUpdatePartitionLeadership
> ---
>
> Key: KAFKA-16027
> URL: https://issues.apache.org/jira/browse/KAFKA-16027
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Philip Nee
>Assignee: Alexander Aghili
>Priority: Minor
>  Labels: newbie
>
> MetadataTest#testUpdatePartitionLeadership is extremely long. I think it is 
> pretty close to the 160-line method limit - I tried to modify it, but it 
> would hit the limit when I tried to break things into separate lines.
> The test also contains two tests, so it is best to split it into two 
> separate tests.
> We should also move this to ConsumerMetadata.java





[jira] [Commented] (KAFKA-16553) log controller configs when startup

2024-04-29 Thread Johnny Hsu (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17842011#comment-17842011
 ] 

Johnny Hsu commented on KAFKA-16553:


hey [~chia7712], I would like to address this first with the current approach, 
and refactor this part after the changes in KAFKA-13105, so that we can get 
this controller log first. wdyt?

> log controller configs when startup
> ---
>
> Key: KAFKA-16553
> URL: https://issues.apache.org/jira/browse/KAFKA-16553
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Minor
>
> We can't observe the controller configs from the log file. We can copy the 
> solution used by broker 
> (https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/server/BrokerServer.scala#L492).
> Or this issue should be blocked by 
> https://issues.apache.org/jira/browse/KAFKA-13105 to wait for a more 
> graceful solution.





[jira] [Commented] (KAFKA-15897) Flaky Test: testWrongIncarnationId() – kafka.server.ControllerRegistrationManagerTest

2024-04-29 Thread Johnny Hsu (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-15897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17841998#comment-17841998
 ] 

Johnny Hsu commented on KAFKA-15897:


Previously I thought the poll should be fine, since `rpcStats(manager)` always 
returns the current status of the manager; no matter whether there are race 
conditions or not, if `prepareResponseFrom` was executed before the second 
poll, it should not fail.
However, there is actually another event-queue thread which appends the 
request. Thus, if there is a race between the first and second poll, the 
second poll can fail, because the request could be taken away by the first 
poll.

Thanks [~chia7712] for the thorough discussion, I am willing to fix this :)

> Flaky Test: testWrongIncarnationId() – 
> kafka.server.ControllerRegistrationManagerTest
> -
>
> Key: KAFKA-15897
> URL: https://issues.apache.org/jira/browse/KAFKA-15897
> Project: Kafka
>  Issue Type: Test
>Reporter: Apoorv Mittal
>Assignee: Chia-Ping Tsai
>Priority: Major
>  Labels: flaky-test
>
> Build run: 
> https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka-pr/detail/PR-14699/21/tests/
>  
> {code:java}
> org.opentest4j.AssertionFailedError: expected: <(false,1,0)> but was: 
> <(true,0,0)>
> Stacktrace: org.opentest4j.AssertionFailedError: expected: 
> <(false,1,0)> but was: <(true,0,0)> at 
> app//org.junit.jupiter.api.AssertionFailureBuilder.build(AssertionFailureBuilder.java:151)
>at 
> app//org.junit.jupiter.api.AssertionFailureBuilder.buildAndThrow(AssertionFailureBuilder.java:132)
>at 
> app//org.junit.jupiter.api.AssertEquals.failNotEqual(AssertEquals.java:197)  
> at 
> app//org.junit.jupiter.api.AssertEquals.assertEquals(AssertEquals.java:182)  
> at 
> app//org.junit.jupiter.api.AssertEquals.assertEquals(AssertEquals.java:177)  
> at app//org.junit.jupiter.api.Assertions.assertEquals(Assertions.java:1141)   
>   at 
> app//kafka.server.ControllerRegistrationManagerTest.$anonfun$testWrongIncarnationId$3(ControllerRegistrationManagerTest.scala:228)
>at 
> app//org.apache.kafka.test.TestUtils.retryOnExceptionWithTimeout(TestUtils.java:379)
>  at 
> app//kafka.server.ControllerRegistrationManagerTest.testWrongIncarnationId(ControllerRegistrationManagerTest.scala:226)
>   at 
> java.base@17.0.7/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)at 
> java.base@17.0.7/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
> at java.base@17.0.7/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.base@17.0.7/java.lang.reflect.Method.invoke(Method.java:568)
> at app//org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:728)
> at app//org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
> at app//org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131)
> at app//org.junit.jupiter.engine.extension.SameThreadTimeoutInvocation.proceed(SameThreadTimeoutInvocation.java:45)
> at app//org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:156)
> at app//org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:147)
> at app//org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:86)
> at app//org.junit.jupiter.engine.execution.InterceptingExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(InterceptingExecutableInvoker.java:103)
> at app//org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.lambda$invoke$0(InterceptingExecutableInvoker.java:93)
> at app//org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106)
> at app//org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64)
> at app//org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45)
> at app//org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37)
> at app//org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.invoke(InterceptingExecutableInvoker.java:92)
> at app//org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.invoke(InterceptingExecutableInvoker.java:86)
> at 

[jira] [Assigned] (KAFKA-13105) Expose a method in KafkaConfig to write the configs to a logger

2024-04-16 Thread Johnny Hsu (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Johnny Hsu reassigned KAFKA-13105:
--

Assignee: Johnny Hsu

> Expose a method in KafkaConfig to write the configs to a logger
> ---
>
> Key: KAFKA-13105
> URL: https://issues.apache.org/jira/browse/KAFKA-13105
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Colin McCabe
>Assignee: Johnny Hsu
>Priority: Minor
>  Labels: 4.0-blocker
>
> We should expose a method in KafkaConfig to write the configs to a logger. 
> Currently there is no good way to write them out except creating a new 
> KafkaConfig object with doLog = true, which is unintuitive.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-13105) Expose a method in KafkaConfig to write the configs to a logger

2024-04-15 Thread Johnny Hsu (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-13105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17837366#comment-17837366
 ] 

Johnny Hsu commented on KAFKA-13105:


hi [~cmccabe], I would like to work on this if no one is on it now :D

> Expose a method in KafkaConfig to write the configs to a logger
> ---
>
> Key: KAFKA-13105
> URL: https://issues.apache.org/jira/browse/KAFKA-13105
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Colin McCabe
>Priority: Minor
>  Labels: 4.0-blocker
>
> We should expose a method in KafkaConfig to write the configs to a logger. 
> Currently there is no good way to write them out except creating a new 
> KafkaConfig object with doLog = true, which is unintuitive.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-16475) Create unit test for TopicImageNode

2024-04-05 Thread Johnny Hsu (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17834261#comment-17834261
 ] 

Johnny Hsu commented on KAFKA-16475:


hi [~cmccabe] 
I am willing to work on this ticket, thanks! 

> Create unit test for TopicImageNode
> ---
>
> Key: KAFKA-16475
> URL: https://issues.apache.org/jira/browse/KAFKA-16475
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Colin McCabe
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (KAFKA-16475) Create unit test for TopicImageNode

2024-04-05 Thread Johnny Hsu (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16475?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Johnny Hsu reassigned KAFKA-16475:
--

Assignee: Johnny Hsu

> Create unit test for TopicImageNode
> ---
>
> Key: KAFKA-16475
> URL: https://issues.apache.org/jira/browse/KAFKA-16475
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Colin McCabe
>Assignee: Johnny Hsu
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (KAFKA-13906) Invalid replica state transition

2024-03-29 Thread Johnny Hsu (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Johnny Hsu reassigned KAFKA-13906:
--

Assignee: Johnny Hsu

> Invalid replica state transition
> 
>
> Key: KAFKA-13906
> URL: https://issues.apache.org/jira/browse/KAFKA-13906
> Project: Kafka
>  Issue Type: Bug
>  Components: controller, core, replication
>Affects Versions: 3.1.0, 3.0.0, 3.0.1, 3.2.0, 3.1.1, 3.3.0, 3.0.2, 3.1.2, 
> 3.2.1
>Reporter: Igor Soarez
>Assignee: Johnny Hsu
>Priority: Major
>  Labels: BUG, controller, replication, reproducible-bug
>
> The controller runs into an IllegalStateException when reacting to changes in 
> broker membership status if there are topics that are pending deletion.
>  
> How to reproduce:
>  # Setup cluster with 3 brokers
>  # Create a topic with a partition being led by each broker and produce some 
> data
>  # Kill one of the brokers that is not the controller, and keep that broker 
> down
>  # Delete the topic
>  # Restart the other broker that is not the controller
>  
> Logs and stacktrace:
> {{[2022-05-16 11:53:25,482] ERROR [Controller id=1 epoch=1] Controller 1 
> epoch 1 initiated state change of replica 3 for partition test-topic-2 from 
> ReplicaDeletionSuccessful to ReplicaDeletionIneligible failed 
> (state.change.logger)}}
> {{java.lang.IllegalStateException: Replica 
> [Topic=test-topic,Partition=2,Replica=3] should be in the 
> OfflineReplica,ReplicaDeletionStarted states before moving to 
> ReplicaDeletionIneligible state. Instead it is in ReplicaDeletionSuccessful 
> state}}
> {{        at 
> kafka.controller.ZkReplicaStateMachine.logInvalidTransition(ReplicaStateMachine.scala:442)}}
> {{        at 
> kafka.controller.ZkReplicaStateMachine.$anonfun$doHandleStateChanges$2(ReplicaStateMachine.scala:164)}}
> {{        at 
> kafka.controller.ZkReplicaStateMachine.$anonfun$doHandleStateChanges$2$adapted(ReplicaStateMachine.scala:164)}}
> {{        at scala.collection.immutable.List.foreach(List.scala:333)}}
> {{        at 
> kafka.controller.ZkReplicaStateMachine.doHandleStateChanges(ReplicaStateMachine.scala:164)}}
> {{        at 
> kafka.controller.ZkReplicaStateMachine.$anonfun$handleStateChanges$2(ReplicaStateMachine.scala:112)}}
> {{        at 
> kafka.controller.ZkReplicaStateMachine.$anonfun$handleStateChanges$2$adapted(ReplicaStateMachine.scala:111)}}
> {{        at 
> kafka.utils.Implicits$MapExtensionMethods$.$anonfun$forKeyValue$1(Implicits.scala:62)}}
> {{        at 
> scala.collection.immutable.HashMap.foreachEntry(HashMap.scala:1092)}}
> {{        at 
> kafka.controller.ZkReplicaStateMachine.handleStateChanges(ReplicaStateMachine.scala:111)}}
> {{        at 
> kafka.controller.TopicDeletionManager.failReplicaDeletion(TopicDeletionManager.scala:157)}}
> {{        at 
> kafka.controller.KafkaController.onReplicasBecomeOffline(KafkaController.scala:638)}}
> {{        at 
> kafka.controller.KafkaController.onBrokerFailure(KafkaController.scala:599)}}
> {{        at 
> kafka.controller.KafkaController.processBrokerChange(KafkaController.scala:1623)}}
> {{        at 
> kafka.controller.KafkaController.process(KafkaController.scala:2534)}}
> {{        at 
> kafka.controller.QueuedEvent.process(ControllerEventManager.scala:52)}}
> {{        at 
> kafka.controller.ControllerEventManager$ControllerEventThread.process$1(ControllerEventManager.scala:130)}}
> {{--}}
> {{[2022-05-16 11:53:40,726] ERROR [Controller id=1 epoch=1] Controller 1 
> epoch 1 initiated state change of replica 3 for partition test-topic-2 from 
> ReplicaDeletionSuccessful to OnlineReplica failed (state.change.logger)}}
> {{java.lang.IllegalStateException: Replica 
> [Topic=test-topic,Partition=2,Replica=3] should be in the 
> NewReplica,OnlineReplica,OfflineReplica,ReplicaDeletionIneligible states 
> before moving to OnlineReplica state. Instead it is in 
> ReplicaDeletionSuccessful state}}
> {{        at 
> kafka.controller.ZkReplicaStateMachine.logInvalidTransition(ReplicaStateMachine.scala:442)}}
> {{        at 
> kafka.controller.ZkReplicaStateMachine.$anonfun$doHandleStateChanges$2(ReplicaStateMachine.scala:164)}}
> {{        at 
> kafka.controller.ZkReplicaStateMachine.$anonfun$doHandleStateChanges$2$adapted(ReplicaStateMachine.scala:164)}}
> {{        at scala.collection.immutable.List.foreach(List.scala:333)}}
> {{        at 
> kafka.controller.ZkReplicaStateMachine.doHandleStateChanges(ReplicaStateMachine.scala:164)}}
> {{        at 
> kafka.controller.ZkReplicaStateMachine.$anonfun$handleStateChanges$2(ReplicaStateMachine.scala:112)}}
> {{        at 
> kafka.controller.ZkReplicaStateMachine.$anonfun$handleStateChanges$2$adapted(ReplicaStateMachine.scala:111)}}
> {{        at 
> 

[jira] [Commented] (KAFKA-13906) Invalid replica state transition

2024-03-29 Thread Johnny Hsu (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-13906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17832368#comment-17832368
 ] 

Johnny Hsu commented on KAFKA-13906:


hey [~soarez] [~showuon], I would like to work on this; let me assign it to 
myself. Thanks for reporting this!

> Invalid replica state transition
> 
>
> Key: KAFKA-13906
> URL: https://issues.apache.org/jira/browse/KAFKA-13906
> Project: Kafka
>  Issue Type: Bug
>  Components: controller, core, replication
>Affects Versions: 3.1.0, 3.0.0, 3.0.1, 3.2.0, 3.1.1, 3.3.0, 3.0.2, 3.1.2, 
> 3.2.1
>Reporter: Igor Soarez
>Priority: Major
>  Labels: BUG, controller, replication, reproducible-bug
>
> The controller runs into an IllegalStateException when reacting to changes in 
> broker membership status if there are topics that are pending deletion.
>  
> How to reproduce:
>  # Setup cluster with 3 brokers
>  # Create a topic with a partition being led by each broker and produce some 
> data
>  # Kill one of the brokers that is not the controller, and keep that broker 
> down
>  # Delete the topic
>  # Restart the other broker that is not the controller
>  
> Logs and stacktrace:
> {{[2022-05-16 11:53:25,482] ERROR [Controller id=1 epoch=1] Controller 1 
> epoch 1 initiated state change of replica 3 for partition test-topic-2 from 
> ReplicaDeletionSuccessful to ReplicaDeletionIneligible failed 
> (state.change.logger)}}
> {{java.lang.IllegalStateException: Replica 
> [Topic=test-topic,Partition=2,Replica=3] should be in the 
> OfflineReplica,ReplicaDeletionStarted states before moving to 
> ReplicaDeletionIneligible state. Instead it is in ReplicaDeletionSuccessful 
> state}}
> {{        at 
> kafka.controller.ZkReplicaStateMachine.logInvalidTransition(ReplicaStateMachine.scala:442)}}
> {{        at 
> kafka.controller.ZkReplicaStateMachine.$anonfun$doHandleStateChanges$2(ReplicaStateMachine.scala:164)}}
> {{        at 
> kafka.controller.ZkReplicaStateMachine.$anonfun$doHandleStateChanges$2$adapted(ReplicaStateMachine.scala:164)}}
> {{        at scala.collection.immutable.List.foreach(List.scala:333)}}
> {{        at 
> kafka.controller.ZkReplicaStateMachine.doHandleStateChanges(ReplicaStateMachine.scala:164)}}
> {{        at 
> kafka.controller.ZkReplicaStateMachine.$anonfun$handleStateChanges$2(ReplicaStateMachine.scala:112)}}
> {{        at 
> kafka.controller.ZkReplicaStateMachine.$anonfun$handleStateChanges$2$adapted(ReplicaStateMachine.scala:111)}}
> {{        at 
> kafka.utils.Implicits$MapExtensionMethods$.$anonfun$forKeyValue$1(Implicits.scala:62)}}
> {{        at 
> scala.collection.immutable.HashMap.foreachEntry(HashMap.scala:1092)}}
> {{        at 
> kafka.controller.ZkReplicaStateMachine.handleStateChanges(ReplicaStateMachine.scala:111)}}
> {{        at 
> kafka.controller.TopicDeletionManager.failReplicaDeletion(TopicDeletionManager.scala:157)}}
> {{        at 
> kafka.controller.KafkaController.onReplicasBecomeOffline(KafkaController.scala:638)}}
> {{        at 
> kafka.controller.KafkaController.onBrokerFailure(KafkaController.scala:599)}}
> {{        at 
> kafka.controller.KafkaController.processBrokerChange(KafkaController.scala:1623)}}
> {{        at 
> kafka.controller.KafkaController.process(KafkaController.scala:2534)}}
> {{        at 
> kafka.controller.QueuedEvent.process(ControllerEventManager.scala:52)}}
> {{        at 
> kafka.controller.ControllerEventManager$ControllerEventThread.process$1(ControllerEventManager.scala:130)}}
> {{--}}
> {{[2022-05-16 11:53:40,726] ERROR [Controller id=1 epoch=1] Controller 1 
> epoch 1 initiated state change of replica 3 for partition test-topic-2 from 
> ReplicaDeletionSuccessful to OnlineReplica failed (state.change.logger)}}
> {{java.lang.IllegalStateException: Replica 
> [Topic=test-topic,Partition=2,Replica=3] should be in the 
> NewReplica,OnlineReplica,OfflineReplica,ReplicaDeletionIneligible states 
> before moving to OnlineReplica state. Instead it is in 
> ReplicaDeletionSuccessful state}}
> {{        at 
> kafka.controller.ZkReplicaStateMachine.logInvalidTransition(ReplicaStateMachine.scala:442)}}
> {{        at 
> kafka.controller.ZkReplicaStateMachine.$anonfun$doHandleStateChanges$2(ReplicaStateMachine.scala:164)}}
> {{        at 
> kafka.controller.ZkReplicaStateMachine.$anonfun$doHandleStateChanges$2$adapted(ReplicaStateMachine.scala:164)}}
> {{        at scala.collection.immutable.List.foreach(List.scala:333)}}
> {{        at 
> kafka.controller.ZkReplicaStateMachine.doHandleStateChanges(ReplicaStateMachine.scala:164)}}
> {{        at 
> kafka.controller.ZkReplicaStateMachine.$anonfun$handleStateChanges$2(ReplicaStateMachine.scala:112)}}
> {{        at 
> kafka.controller.ZkReplicaStateMachine.$anonfun$handleStateChanges$2$adapted(ReplicaStateMachine.scala:111)}}
> {{        

[jira] [Commented] (KAFKA-16310) ListOffsets doesn't report the offset with maxTimestamp anymore

2024-03-29 Thread Johnny Hsu (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17832226#comment-17832226
 ] 

Johnny Hsu commented on KAFKA-16310:


thanks [~junrao] for pointing this out and sharing the solution, and thanks 
[~chia7712] [~showuon] for those discussions and revert it for 3.6. 

 
{quote}Since this is a rare operation, paying the decompression overhead is 
fine.

 

Adding a new field in the batch requires record format change, which is a much 
bigger effort. For now, the easiest thing is to add a method in Batch to find 
out offsetOfMaxTimestanp by iterating all records.

Regarding the optimization on the leader side by caching offsetOfMaxTimestanp, 
we could do it. However, my understanding is that listMaxTimestamp is rare and 
I am not sure if it's worth the additional complexity.
{quote}
I have gone through the comments and had an offline discussion with Chia-Ping; 
I got more context and also feel that we can shift the workload to 
listMaxTimestamp when clients fetch it, since the operation is rare. What do 
you think?
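
For illustration, a minimal sketch of the "iterate all records of a batch" approach mentioned above. The names (SimpleRecord, MaxTimestampFinder) are hypothetical stand-ins, not the actual Kafka Batch API; the strict greater-than comparison also makes ties resolve to the earliest offset, as required:

```java
import java.util.List;
import java.util.Optional;

public class MaxTimestampFinder {
    // Minimal stand-in for a record carrying an offset and a timestamp.
    record SimpleRecord(long offset, long timestamp) {}

    // Returns the offset of the record with the highest timestamp; on equal
    // timestamps the earliest offset wins because the comparison is strict.
    static Optional<Long> offsetOfMaxTimestamp(List<SimpleRecord> records) {
        long maxTs = Long.MIN_VALUE;
        Optional<Long> offset = Optional.empty();
        for (SimpleRecord r : records) {
            if (r.timestamp() > maxTs) {
                maxTs = r.timestamp();
                offset = Optional.of(r.offset());
            }
        }
        return offset;
    }

    public static void main(String[] args) {
        // Timestamps out of order, as in the librdkafka test: t0+100, t0+400, t0+250.
        List<SimpleRecord> batch = List.of(
            new SimpleRecord(0, 100), new SimpleRecord(1, 400), new SimpleRecord(2, 250));
        System.out.println(offsetOfMaxTimestamp(batch).get()); // prints 1
    }
}
```

Since this walks every record, compressed batches would pay a decompression cost, which seems acceptable given how rare the listMaxTimestamp operation is.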

> ListOffsets doesn't report the offset with maxTimestamp anymore
> ---
>
> Key: KAFKA-16310
> URL: https://issues.apache.org/jira/browse/KAFKA-16310
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 3.7.0
>Reporter: Emanuele Sabellico
>Assignee: Chia-Ping Tsai
>Priority: Blocker
> Fix For: 3.8.0
>
>
> Updated: This is confirmed to be a regression issue in v3.7.0. 
> The impact of this issue is that when there is a batch containing records 
> with timestamps not in order, the offset reported for the max timestamp will 
> be wrong (e.g. the timestamp for t0 should map to offset 10 but offset 12 is 
> returned). This causes the time index to store the wrong offset, so the 
> result will be unexpected. 
> ===
> The last offset is reported instead.
> A test in librdkafka (0081/do_test_ListOffsets) is failing; it checks 
> that the offset with the max timestamp is the middle one and not the last 
> one. The test passes with 3.6.0 and previous versions.
> This is the test:
> [https://github.com/confluentinc/librdkafka/blob/a6d85bdbc1023b1a5477b8befe516242c3e182f6/tests/0081-admin.c#L4989]
>  
> there are three messages, with timestamps:
> {noformat}
> t0 + 100
> t0 + 400
> t0 + 250{noformat}
> and indices 0,1,2. 
> then a ListOffsets with RD_KAFKA_OFFSET_SPEC_MAX_TIMESTAMP is done.
> it should return offset 1 but in 3.7.0 and trunk is returning offset 2
> Even after 5 seconds from producing it's still returning 2 as the offset with 
> max timestamp.
> ProduceRequest and ListOffsets were sent to the same broker (2), the leader 
> didn't change.
> {code:java}
> %7|1709134230.019|SEND|0081_admin#producer-3| 
> [thrd:localhost:39951/bootstrap]: localhost:39951/2: Sent ProduceRequest (v7, 
> 206 bytes @ 0, CorrId 2) %7|1709134230.020|RECV|0081_admin#producer-3| 
> [thrd:localhost:39951/bootstrap]: localhost:39951/2: Received ProduceResponse 
> (v7, 95 bytes, CorrId 2, rtt 1.18ms) 
> %7|1709134230.020|MSGSET|0081_admin#producer-3| 
> [thrd:localhost:39951/bootstrap]: localhost:39951/2: 
> rdkafkatest_rnd22e8d8ec45b53f98_do_test_ListOffsets [0]: MessageSet with 3 
> message(s) (MsgId 0, BaseSeq -1) delivered {code}
> {code:java}
> %7|1709134235.021|SEND|0081_admin#producer-2| 
> [thrd:localhost:39951/bootstrap]: localhost:39951/2: Sent ListOffsetsRequest 
> (v7, 103 bytes @ 0, CorrId 7) %7|1709134235.022|RECV|0081_admin#producer-2| 
> [thrd:localhost:39951/bootstrap]: localhost:39951/2: Received 
> ListOffsetsResponse (v7, 88 bytes, CorrId 7, rtt 0.54ms){code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-16310) ListOffsets doesn't report the offset with maxTimestamp anymore

2024-03-27 Thread Johnny Hsu (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17831464#comment-17831464
 ] 

Johnny Hsu commented on KAFKA-16310:


{quote}[~chia7712] thanks for the help!

This returns the offset and timestamp corresponding to the record with the 
highest timestamp on the partition. Noted that we should choose the offset of 
the earliest record if the timestamp of the records are the same.

This sounds good to me, thanks! 


{quote}

> ListOffsets doesn't report the offset with maxTimestamp anymore
> ---
>
> Key: KAFKA-16310
> URL: https://issues.apache.org/jira/browse/KAFKA-16310
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 3.7.0
>Reporter: Emanuele Sabellico
>Assignee: Chia-Ping Tsai
>Priority: Blocker
> Fix For: 3.6.2, 3.8.0, 3.7.1
>
>
> Updated: This is confirmed to be a regression issue in v3.7.0. 
> The impact of this issue is that when there is a batch containing records 
> with timestamps not in order, the offset reported for the max timestamp will 
> be wrong (e.g. the timestamp for t0 should map to offset 10 but offset 12 is 
> returned). This causes the time index to store the wrong offset, so the 
> result will be unexpected. 
> ===
> The last offset is reported instead.
> A test in librdkafka (0081/do_test_ListOffsets) is failing; it checks 
> that the offset with the max timestamp is the middle one and not the last 
> one. The test passes with 3.6.0 and previous versions.
> This is the test:
> [https://github.com/confluentinc/librdkafka/blob/a6d85bdbc1023b1a5477b8befe516242c3e182f6/tests/0081-admin.c#L4989]
>  
> there are three messages, with timestamps:
> {noformat}
> t0 + 100
> t0 + 400
> t0 + 250{noformat}
> and indices 0,1,2. 
> then a ListOffsets with RD_KAFKA_OFFSET_SPEC_MAX_TIMESTAMP is done.
> it should return offset 1 but in 3.7.0 and trunk is returning offset 2
> Even after 5 seconds from producing it's still returning 2 as the offset with 
> max timestamp.
> ProduceRequest and ListOffsets were sent to the same broker (2), the leader 
> didn't change.
> {code:java}
> %7|1709134230.019|SEND|0081_admin#producer-3| 
> [thrd:localhost:39951/bootstrap]: localhost:39951/2: Sent ProduceRequest (v7, 
> 206 bytes @ 0, CorrId 2) %7|1709134230.020|RECV|0081_admin#producer-3| 
> [thrd:localhost:39951/bootstrap]: localhost:39951/2: Received ProduceResponse 
> (v7, 95 bytes, CorrId 2, rtt 1.18ms) 
> %7|1709134230.020|MSGSET|0081_admin#producer-3| 
> [thrd:localhost:39951/bootstrap]: localhost:39951/2: 
> rdkafkatest_rnd22e8d8ec45b53f98_do_test_ListOffsets [0]: MessageSet with 3 
> message(s) (MsgId 0, BaseSeq -1) delivered {code}
> {code:java}
> %7|1709134235.021|SEND|0081_admin#producer-2| 
> [thrd:localhost:39951/bootstrap]: localhost:39951/2: Sent ListOffsetsRequest 
> (v7, 103 bytes @ 0, CorrId 7) %7|1709134235.022|RECV|0081_admin#producer-2| 
> [thrd:localhost:39951/bootstrap]: localhost:39951/2: Received 
> ListOffsetsResponse (v7, 88 bytes, CorrId 7, rtt 0.54ms){code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (KAFKA-16310) ListOffsets doesn't report the offset with maxTimestamp anymore

2024-03-27 Thread Johnny Hsu (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17831464#comment-17831464
 ] 

Johnny Hsu edited comment on KAFKA-16310 at 3/27/24 5:01 PM:
-

{quote}[~chia7712] thanks for the help!

This returns the offset and timestamp corresponding to the record with the 
highest timestamp on the partition. Noted that we should choose the offset of 
the earliest record if the timestamp of the records are the same.

This sounds good to me, thanks! {quote}


was (Author: JIRAUSER304478):
{quote}[~chia7712] thanks for the help!

This returns the offset and timestamp corresponding to the record with the 
highest timestamp on the partition. Noted that we should choose the offset of 
the earliest record if the timestamp of the records are the same.

This sounds good to me, thanks! 


{quote}

> ListOffsets doesn't report the offset with maxTimestamp anymore
> ---
>
> Key: KAFKA-16310
> URL: https://issues.apache.org/jira/browse/KAFKA-16310
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 3.7.0
>Reporter: Emanuele Sabellico
>Assignee: Chia-Ping Tsai
>Priority: Blocker
> Fix For: 3.6.2, 3.8.0, 3.7.1
>
>
> Updated: This is confirmed to be a regression issue in v3.7.0. 
> The impact of this issue is that when there is a batch containing records 
> with timestamps not in order, the offset reported for the max timestamp will 
> be wrong (e.g. the timestamp for t0 should map to offset 10 but offset 12 is 
> returned). This causes the time index to store the wrong offset, so the 
> result will be unexpected. 
> ===
> The last offset is reported instead.
> A test in librdkafka (0081/do_test_ListOffsets) is failing; it checks 
> that the offset with the max timestamp is the middle one and not the last 
> one. The test passes with 3.6.0 and previous versions.
> This is the test:
> [https://github.com/confluentinc/librdkafka/blob/a6d85bdbc1023b1a5477b8befe516242c3e182f6/tests/0081-admin.c#L4989]
>  
> there are three messages, with timestamps:
> {noformat}
> t0 + 100
> t0 + 400
> t0 + 250{noformat}
> and indices 0,1,2. 
> then a ListOffsets with RD_KAFKA_OFFSET_SPEC_MAX_TIMESTAMP is done.
> it should return offset 1 but in 3.7.0 and trunk is returning offset 2
> Even after 5 seconds from producing it's still returning 2 as the offset with 
> max timestamp.
> ProduceRequest and ListOffsets were sent to the same broker (2), the leader 
> didn't change.
> {code:java}
> %7|1709134230.019|SEND|0081_admin#producer-3| 
> [thrd:localhost:39951/bootstrap]: localhost:39951/2: Sent ProduceRequest (v7, 
> 206 bytes @ 0, CorrId 2) %7|1709134230.020|RECV|0081_admin#producer-3| 
> [thrd:localhost:39951/bootstrap]: localhost:39951/2: Received ProduceResponse 
> (v7, 95 bytes, CorrId 2, rtt 1.18ms) 
> %7|1709134230.020|MSGSET|0081_admin#producer-3| 
> [thrd:localhost:39951/bootstrap]: localhost:39951/2: 
> rdkafkatest_rnd22e8d8ec45b53f98_do_test_ListOffsets [0]: MessageSet with 3 
> message(s) (MsgId 0, BaseSeq -1) delivered {code}
> {code:java}
> %7|1709134235.021|SEND|0081_admin#producer-2| 
> [thrd:localhost:39951/bootstrap]: localhost:39951/2: Sent ListOffsetsRequest 
> (v7, 103 bytes @ 0, CorrId 7) %7|1709134235.022|RECV|0081_admin#producer-2| 
> [thrd:localhost:39951/bootstrap]: localhost:39951/2: Received 
> ListOffsetsResponse (v7, 88 bytes, CorrId 7, rtt 0.54ms){code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-16310) ListOffsets doesn't report the offset with maxTimestamp anymore

2024-03-27 Thread Johnny Hsu (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17831460#comment-17831460
 ] 

Johnny Hsu commented on KAFKA-16310:


The update is in the section below:

 

## When the TimestampType is LOG_APPEND_TIME

When the TimestampType is LOG_APPEND_TIME, the timestamps of the records are 
the same. In this case, we should choose the offset of the first record. [This 
path|https://github.com/apache/kafka/blob/6f38fe5e0a6e2fe85fec7cb9adc379061d35ce45/storage/src/main/java/org/apache/kafka/storage/internals/log/LogValidator.java#L294]
 in LogValidator was added to handle this case for the non-compressed type, 
while [this 
path|https://github.com/apache/kafka/blob/6f38fe5e0a6e2fe85fec7cb9adc379061d35ce45/storage/src/main/java/org/apache/kafka/storage/internals/log/LogValidator.java#L421]
 in LogValidator was added to handle it for the compressed type.

I don't have the Confluence account yet, [~chia7712] would you please help 
update the KIP in the wiki? I will send this update to the dev thread for 
visibility. Thanks! 

> ListOffsets doesn't report the offset with maxTimestamp anymore
> ---
>
> Key: KAFKA-16310
> URL: https://issues.apache.org/jira/browse/KAFKA-16310
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 3.7.0
>Reporter: Emanuele Sabellico
>Assignee: Chia-Ping Tsai
>Priority: Blocker
> Fix For: 3.6.2, 3.8.0, 3.7.1
>
>
> Updated: This is confirmed to be a regression issue in v3.7.0. 
> The impact of this issue is that when there is a batch containing records 
> with timestamps not in order, the offset reported for the max timestamp will 
> be wrong (e.g. the timestamp for t0 should map to offset 10 but offset 12 is 
> returned). This causes the time index to store the wrong offset, so the 
> result will be unexpected. 
> ===
> The last offset is reported instead.
> A test in librdkafka (0081/do_test_ListOffsets) is failing; it checks 
> that the offset with the max timestamp is the middle one and not the last 
> one. The test passes with 3.6.0 and previous versions.
> This is the test:
> [https://github.com/confluentinc/librdkafka/blob/a6d85bdbc1023b1a5477b8befe516242c3e182f6/tests/0081-admin.c#L4989]
>  
> there are three messages, with timestamps:
> {noformat}
> t0 + 100
> t0 + 400
> t0 + 250{noformat}
> and indices 0,1,2. 
> then a ListOffsets with RD_KAFKA_OFFSET_SPEC_MAX_TIMESTAMP is done.
> it should return offset 1 but in 3.7.0 and trunk is returning offset 2
> Even after 5 seconds from producing it's still returning 2 as the offset with 
> max timestamp.
> ProduceRequest and ListOffsets were sent to the same broker (2), the leader 
> didn't change.
> {code:java}
> %7|1709134230.019|SEND|0081_admin#producer-3| 
> [thrd:localhost:39951/bootstrap]: localhost:39951/2: Sent ProduceRequest (v7, 
> 206 bytes @ 0, CorrId 2) %7|1709134230.020|RECV|0081_admin#producer-3| 
> [thrd:localhost:39951/bootstrap]: localhost:39951/2: Received ProduceResponse 
> (v7, 95 bytes, CorrId 2, rtt 1.18ms) 
> %7|1709134230.020|MSGSET|0081_admin#producer-3| 
> [thrd:localhost:39951/bootstrap]: localhost:39951/2: 
> rdkafkatest_rnd22e8d8ec45b53f98_do_test_ListOffsets [0]: MessageSet with 3 
> message(s) (MsgId 0, BaseSeq -1) delivered {code}
> {code:java}
> %7|1709134235.021|SEND|0081_admin#producer-2| 
> [thrd:localhost:39951/bootstrap]: localhost:39951/2: Sent ListOffsetsRequest 
> (v7, 103 bytes @ 0, CorrId 7) %7|1709134235.022|RECV|0081_admin#producer-2| 
> [thrd:localhost:39951/bootstrap]: localhost:39951/2: Received 
> ListOffsetsResponse (v7, 88 bytes, CorrId 7, rtt 0.54ms){code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16419) Abstract validateMessagesAndAssignOffsetsCompressed of LogValidator to simplify the process

2024-03-25 Thread Johnny Hsu (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Johnny Hsu updated KAFKA-16419:
---
Description: 
Currently in the 
[LogValidator.validateMessagesAndAssignOffsetsCompressed|https://github.com/apache/kafka/blob/51c9b0d0ad408754b1c5883a9c7fcc63a5f57eb8/storage/src/main/java/org/apache/kafka/storage/internals/log/LogValidator.java#L315],
 there are lots of if-else checks based on the `magic` and `CompressionType`, 
which make the code complicated and increase the difficulty of maintenance. 

The flow of the validation can be separated into 5 steps:
 # IBP validation
 ## whether the compression type is valid for this IBP
 # In-place assignment enablement check
 ## based on the magic value and compression type, decide whether we can do 
in-place assignment
 # batch level validation
 ## based on the batch origin (client, controller, etc) and magic version
 # record level validation
 ## based on whether we can do in-place assignment, choose different iterator 
 ## based on the magic and compression type, do different validation
 # return validated results
 ## based on whether we can do in-place assignment, build the records or assign 
it

This whole flow can be extracted into an interface, and 
LogValidator.validateMessagesAndAssignOffsetsCompressed can instantiate an 
implementation based on the passed-in records.

The implementation class will have the following fields:
 # magic value
 # source compression type
 # target compression type
 # origin
 # records
 # timestamp type
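
The five-step flow above could be sketched as an interface along these lines. This is a minimal sketch with hypothetical names, not the actual KAFKA-16419 design:

```java
// Hypothetical sketch: the five validation steps become methods on one
// interface, with the fixed orchestration in a default method. All names
// here are illustrative only, not the real Kafka implementation.
interface CompressedRecordsValidator {
    void validateIbpCompatibility();          // step 1: IBP vs. compression type
    boolean canAssignOffsetsInPlace();        // step 2: magic + compression check
    void validateBatches();                   // step 3: batch-level validation
    void validateRecords(boolean inPlace);    // step 4: record-level validation
    String buildResult(boolean inPlace);      // step 5: build records or assign in place

    // validateMessagesAndAssignOffsetsCompressed would pick an implementation
    // based on the passed-in records, then run this fixed sequence.
    default String validateAndAssign() {
        validateIbpCompatibility();
        boolean inPlace = canAssignOffsetsInPlace();
        validateBatches();
        validateRecords(inPlace);
        return buildResult(inPlace);
    }
}

class Demo implements CompressedRecordsValidator {
    public void validateIbpCompatibility() {}
    public boolean canAssignOffsetsInPlace() { return true; }
    public void validateBatches() {}
    public void validateRecords(boolean inPlace) {}
    public String buildResult(boolean inPlace) { return inPlace ? "in-place" : "rebuilt"; }

    public static void main(String[] args) {
        System.out.println(new Demo().validateAndAssign()); // prints "in-place"
    }
}
```

One implementation per magic/compression combination would then replace the nested if-else checks, keeping each branch's logic in one place.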

  was:
Currently in the 
[LogValidator.validateMessagesAndAssignOffsetsCompressed|http://example.com](https://github.com/apache/kafka/blob/51c9b0d0ad408754b1c5883a9c7fcc63a5f57eb8/storage/src/main/java/org/apache/kafka/storage/internals/log/LogValidator.java#L315),
 there are lots of if-else checks based on the `magic` and `CompressionType`, 
which makes the code complicated and increase the difficulties of maintaining. 

The flow of the validation can be separated into 5 steps:
 # IBP validation
 ## whether the compression type is valid for this IBP
 # In-place assignment enablement check
 ## based on the magic value and compression type, decide whether we can do 
in-place assignment
 # batch level validation
 ## based on the batch origin (client, controller, etc) and magic version
 # record level validation
 ## based on whether we can do in-place assignment, choose different iterator 
 ## based on the magic and compression type, do different validation
 # return validated results
 ## based on whether we can do in-place assignment, build the records or assign 
it

This whole flow can be extracted into an interface, and the 
LogValidator.validateMessagesAndAssignOffsetsCompressed can init an 
implementation based on the passed-in records.

The implementation class will have the following fields:
 # magic value
 # source compression type
 # target compression type
 # origin
 # records
 # timestamp type


> Abstract validateMessagesAndAssignOffsetsCompressed of LogValidator to simplify 
> the process
> -
>
> Key: KAFKA-16419
> URL: https://issues.apache.org/jira/browse/KAFKA-16419
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Johnny Hsu
>Assignee: Johnny Hsu
>Priority: Major
>
> Currently in the 
> [LogValidator.validateMessagesAndAssignOffsetsCompressed|https://github.com/apache/kafka/blob/51c9b0d0ad408754b1c5883a9c7fcc63a5f57eb8/storage/src/main/java/org/apache/kafka/storage/internals/log/LogValidator.java#L315],
>  there are lots of if-else checks based on the `magic` and `CompressionType`, 
> which make the code complicated and increase the difficulty of 
> maintenance. 
> The flow of the validation can be separated into 5 steps:
>  # IBP validation
>  ## whether the compression type is valid for this IBP
>  # In-place assignment enablement check
>  ## based on the magic value and compression type, decide whether we can do 
> in-place assignment
>  # batch level validation
>  ## based on the batch origin (client, controller, etc) and magic version
>  # record level validation
>  ## based on whether we can do in-place assignment, choose different iterator 
>  ## based on the magic and compression type, do different validation
>  # return validated results
>  ## based on whether we can do in-place assignment, build the records or 
> assign it
> This whole flow can be extracted into an interface, and 
> LogValidator.validateMessagesAndAssignOffsetsCompressed can instantiate an 
> implementation based on the passed-in records.
> The implementation class will have the following fields:
>  # magic value
>  # source compression type
>  # target compression type
>  # origin
>  # records
>  # timestamp type



--
This message was sent by Atlassian Jira

[jira] [Updated] (KAFKA-16419) Abstract validateMessagesAndAssignOffsetsCompressed of LogValidator to simply the process

2024-03-25 Thread Johnny Hsu (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Johnny Hsu updated KAFKA-16419:
---
Description: 
Currently in the 
[LogValidator.validateMessagesAndAssignOffsetsCompressed|https://github.com/apache/kafka/blob/51c9b0d0ad408754b1c5883a9c7fcc63a5f57eb8/storage/src/main/java/org/apache/kafka/storage/internals/log/LogValidator.java#L315],
 there are lots of if-else checks based on the `magic` and `CompressionType`, 
which make the code complicated and increase the difficulty of maintenance. 

The flow of the validation can be separated into 5 steps:
 # IBP validation
 ## whether the compression type is valid for this IBP
 # In-place assignment enablement check
 ## based on the magic value and compression type, decide whether we can do 
in-place assignment
 # batch level validation
 ## based on the batch origin (client, controller, etc) and magic version
 # record level validation
 ## based on whether we can do in-place assignment, choose different iterator 
 ## based on the magic and compression type, do different validation
 # return validated results
 ## based on whether we can do in-place assignment, build the records or assign 
it

This whole flow can be extracted into an interface, and the 
LogValidator.validateMessagesAndAssignOffsetsCompressed can init an 
implementation based on the passed-in records.

The implementation class will have the following fields:
 # magic value
 # source compression type
 # target compression type
 # origin
 # records
 # timestamp type

  was:
Currently in the LogValidator.validateMessagesAndAssignOffsetsCompressed, there 
are lots of if-else checks based on the `magic` and `CompressionType`, which 
make the code complicated and increase the difficulty of maintenance. 

The flow of the validation can be separated into 5 steps:
 # IBP validation
 ## whether the compression type is valid for this IBP
 # In-place assignment enablement check
 ## based on the magic value and compression type, decide whether we can do 
in-place assignment
 # batch level validation
 ## based on the batch origin (client, controller, etc) and magic version
 # record level validation
 ## based on whether we can do in-place assignment, choose different iterator 
 ## based on the magic and compression type, do different validation
 # return validated results
 ## based on whether we can do in-place assignment, build the records or assign 
it

This whole flow can be extracted into an interface, and the 
LogValidator.validateMessagesAndAssignOffsetsCompressed can init an 
implementation based on the passed-in records.

The implementation class will have the following fields:
 # magic value
 # source compression type
 # target compression type
 # origin
 # records
 # timestamp type


> Abstract validateMessagesAndAssignOffsetsCompressed of LogValidator to simply 
> the process
> -
>
> Key: KAFKA-16419
> URL: https://issues.apache.org/jira/browse/KAFKA-16419
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Johnny Hsu
>Assignee: Johnny Hsu
>Priority: Major
>
> Currently in the 
> [LogValidator.validateMessagesAndAssignOffsetsCompressed|https://github.com/apache/kafka/blob/51c9b0d0ad408754b1c5883a9c7fcc63a5f57eb8/storage/src/main/java/org/apache/kafka/storage/internals/log/LogValidator.java#L315],
>  there are lots of if-else checks based on the `magic` and `CompressionType`, 
> which make the code complicated and increase the difficulty of 
> maintenance. 
> The flow of the validation can be separated into 5 steps:
>  # IBP validation
>  ## whether the compression type is valid for this IBP
>  # In-place assignment enablement check
>  ## based on the magic value and compression type, decide whether we can do 
> in-place assignment
>  # batch level validation
>  ## based on the batch origin (client, controller, etc) and magic version
>  # record level validation
>  ## based on whether we can do in-place assignment, choose different iterator 
>  ## based on the magic and compression type, do different validation
>  # return validated results
>  ## based on whether we can do in-place assignment, build the records or 
> assign it
> This whole flow can be extracted into an interface, and the 
> LogValidator.validateMessagesAndAssignOffsetsCompressed can init an 
> implementation based on the passed-in records.
> The implementation class will have the following fields:
>  # magic value
>  # source compression type
>  # target compression type
>  # origin
>  # records
>  # timestamp type



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16419) Abstract validateMessagesAndAssignOffsetsCompressed of LogValidator to simply the process

2024-03-25 Thread Johnny Hsu (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Johnny Hsu updated KAFKA-16419:
---
Description: 
Currently in the LogValidator.validateMessagesAndAssignOffsetsCompressed, there 
are lots of if-else checks based on the `magic` and `CompressionType`, which 
make the code complicated and increase the difficulty of maintenance. 

The flow of the validation can be separated into 5 steps:
 # IBP validation
 ## whether the compression type is valid for this IBP
 # In-place assignment enablement check
 ## based on the magic value and compression type, decide whether we can do 
in-place assignment
 # batch level validation
 ## based on the batch origin (client, controller, etc) and magic version
 # record level validation
 ## based on whether we can do in-place assignment, choose different iterator 
 ## based on the magic and compression type, do different validation
 # return validated results
 ## based on whether we can do in-place assignment, build the records or assign 
it

This whole flow can be extracted into an interface, and the 
LogValidator.validateMessagesAndAssignOffsetsCompressed can init an 
implementation based on the passed-in records.

The implementation class will have the following fields:
 # magic value
 # source compression type
 # target compression type
 # origin
 # records
 # timestamp type

  was:
Currently in the LogValidator.validateMessagesAndAssignOffsetsCompressed, there 
are lots of if-else checks based on the `magic` and `CompressionType`, which 
make the code complicated and increase the difficulty of maintenance. 

The flow of the validation can be separated into x steps:
 # IBP validation
 ## whether the compression type is valid for this IBP
 # In-place assignment enablement check
 ## based on the magic value and compression type, decide whether we can do 
in-place assignment
 # batch level validation
 ## based on the batch origin (client, controller, etc) and magic version
 # record level validation
 ## based on whether we can do in-place assignment, choose different iterator 
 ## based on the magic and compression type, do different validation
 # return validated results
 ## based on whether we can do in-place assignment, build the records or assign 
it

This whole flow can be extracted into an interface, and the 
LogValidator.validateMessagesAndAssignOffsetsCompressed can init an 
implementation based on the passed-in records.

The implementation class will have the following fields:
 # magic value
 # source compression type
 # target compression type
 # origin
 # records
 # timestamp type


> Abstract validateMessagesAndAssignOffsetsCompressed of LogValidator to simply 
> the process
> -
>
> Key: KAFKA-16419
> URL: https://issues.apache.org/jira/browse/KAFKA-16419
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Johnny Hsu
>Assignee: Johnny Hsu
>Priority: Major
>
> Currently in the LogValidator.validateMessagesAndAssignOffsetsCompressed, 
> there are lots of if-else checks based on the `magic` and `CompressionType`, 
> which make the code complicated and increase the difficulty of 
> maintenance. 
> The flow of the validation can be separated into 5 steps:
>  # IBP validation
>  ## whether the compression type is valid for this IBP
>  # In-place assignment enablement check
>  ## based on the magic value and compression type, decide whether we can do 
> in-place assignment
>  # batch level validation
>  ## based on the batch origin (client, controller, etc) and magic version
>  # record level validation
>  ## based on whether we can do in-place assignment, choose different iterator 
>  ## based on the magic and compression type, do different validation
>  # return validated results
>  ## based on whether we can do in-place assignment, build the records or 
> assign it
> This whole flow can be extracted into an interface, and the 
> LogValidator.validateMessagesAndAssignOffsetsCompressed can init an 
> implementation based on the passed-in records.
> The implementation class will have the following fields:
>  # magic value
>  # source compression type
>  # target compression type
>  # origin
>  # records
>  # timestamp type



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16419) Abstract validateMessagesAndAssignOffsetsCompressed of LogValidator to simply the process

2024-03-25 Thread Johnny Hsu (Jira)
Johnny Hsu created KAFKA-16419:
--

 Summary: Abstract validateMessagesAndAssignOffsetsCompressed of 
LogValidator to simply the process
 Key: KAFKA-16419
 URL: https://issues.apache.org/jira/browse/KAFKA-16419
 Project: Kafka
  Issue Type: Improvement
Reporter: Johnny Hsu
Assignee: Johnny Hsu


Currently in the LogValidator.validateMessagesAndAssignOffsetsCompressed, there 
are lots of if-else checks based on the `magic` and `CompressionType`, which 
make the code complicated and increase the difficulty of maintenance. 

The flow of the validation can be separated into x steps:
 # IBP validation
 ## whether the compression type is valid for this IBP
 # In-place assignment enablement check
 ## based on the magic value and compression type, decide whether we can do 
in-place assignment
 # batch level validation
 ## based on the batch origin (client, controller, etc) and magic version
 # record level validation
 ## based on whether we can do in-place assignment, choose different iterator 
 ## based on the magic and compression type, do different validation
 # return validated results
 ## based on whether we can do in-place assignment, build the records or assign 
it

This whole flow can be extracted into an interface, and the 
LogValidator.validateMessagesAndAssignOffsetsCompressed can init an 
implementation based on the passed-in records.

The implementation class will have the following fields:
 # magic value
 # source compression type
 # target compression type
 # origin
 # records
 # timestamp type



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-16318) Add javadoc to KafkaMetric

2024-03-25 Thread Johnny Hsu (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17830539#comment-17830539
 ] 

Johnny Hsu commented on KAFKA-16318:


The PR is merged; closing this ticket.

> Add javadoc to KafkaMetric
> --
>
> Key: KAFKA-16318
> URL: https://issues.apache.org/jira/browse/KAFKA-16318
> Project: Kafka
>  Issue Type: Bug
>  Components: docs
>Reporter: Mickael Maison
>Assignee: Johnny Hsu
>Priority: Major
> Fix For: 3.8.0
>
>
> KafkaMetric is part of the public API but it's missing javadoc describing the 
> class and several of its methods.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-16341) Fix un-compressed records

2024-03-21 Thread Johnny Hsu (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17829517#comment-17829517
 ] 

Johnny Hsu commented on KAFKA-16341:


on it now, thanks for the reminder 

> Fix un-compressed records
> -
>
> Key: KAFKA-16341
> URL: https://issues.apache.org/jira/browse/KAFKA-16341
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Luke Chen
>Assignee: Johnny Hsu
>Priority: Major
> Fix For: 3.8.0, 3.7.1
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-16323) Failing test: fix testRemoteFetchExpiresPerSecMetric

2024-03-19 Thread Johnny Hsu (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17828304#comment-17828304
 ] 

Johnny Hsu commented on KAFKA-16323:


I have added logs to observe and tried to verify some potential causes:
 # check whether it really enters the delayed remote fetch
 # check whether it really enters the onExpire() section
 # check whether it succeeds in marking the metrics

All of these were verified and work as expected. 

Thanks to [~showuon], I also tried to add @BeforeAll to the teardown function, 
which removes all metrics before the test. However, it still failed. 

I need some more tries to find the root cause...

> Failing test: fix testRemoteFetchExpiresPerSecMetric 
> -
>
> Key: KAFKA-16323
> URL: https://issues.apache.org/jira/browse/KAFKA-16323
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Reporter: Johnny Hsu
>Assignee: Johnny Hsu
>Priority: Major
>  Labels: test-failure
>
> Refer to 
> [https://ci-builds.apache.org/job/Kafka/job/kafka/job/trunk/2685/testReport/junit/kafka.server/ReplicaManagerTest/Build___JDK_21_and_Scala_2_13___testRemoteFetchExpiresPerSecMetric__/]
> This test is failing, and this ticket aims to address this 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16380) Rename the shallowOffsetOfMaxTimestamp in LogSegment

2024-03-19 Thread Johnny Hsu (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16380?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Johnny Hsu updated KAFKA-16380:
---
Description: 
When working on #KAFKA-16341, we found `shallowOffsetOfMaxTimestamp` also 
appears in LogSegment, which is a confusing name since it actually represents 
the record-level offset instead of a record-batch-level offset. 

Thus, this variable name should be renamed as well. 

More details can be found in 
[https://github.com/apache/kafka/pull/15476].

  was:
When working on #KAFKA-16341, we found `shallowOffsetOfMaxTimestamp` also 
appears in LogSegment, which is a confusing name since it actually represents 
the record-level offset instead of a record-batch-level offset. 

Thus, this variable name should be renamed as well. 

More details can be found in 


> Rename the shallowOffsetOfMaxTimestamp in LogSegment
> 
>
> Key: KAFKA-16380
> URL: https://issues.apache.org/jira/browse/KAFKA-16380
> Project: Kafka
>  Issue Type: Bug
>Reporter: Johnny Hsu
>Assignee: Johnny Hsu
>Priority: Minor
>
> When working on #KAFKA-16341, we found `shallowOffsetOfMaxTimestamp` also 
> appears in LogSegment, which is a confusing name since it actually represents 
> the record-level offset instead of a record-batch-level offset. 
> Thus, this variable name should be renamed as well. 
> More details can be found in 
> [https://github.com/apache/kafka/pull/15476].



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16380) Rename the shallowOffsetOfMaxTimestamp in LogSegment

2024-03-19 Thread Johnny Hsu (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16380?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Johnny Hsu updated KAFKA-16380:
---
Description: 
When working on #KAFKA-16341, we found `shallowOffsetOfMaxTimestamp` also 
appears in LogSegment, which is a confusing name since it actually represents 
the record-level offset instead of a record-batch-level offset. 

Thus, this variable name should be renamed as well. 

More details can be found in 

  was:When working on #KAFKA-16341, we found 


> Rename the shallowOffsetOfMaxTimestamp in LogSegment
> 
>
> Key: KAFKA-16380
> URL: https://issues.apache.org/jira/browse/KAFKA-16380
> Project: Kafka
>  Issue Type: Bug
>Reporter: Johnny Hsu
>Assignee: Johnny Hsu
>Priority: Minor
>
> When working on #KAFKA-16341, we found `shallowOffsetOfMaxTimestamp` also 
> appears in LogSegment, which is a confusing name since it actually represents 
> the record-level offset instead of a record-batch-level offset. 
> Thus, this variable name should be renamed as well. 
> More details can be found in 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16380) Rename the shallowOffsetOfMaxTimestamp in LogSegment

2024-03-19 Thread Johnny Hsu (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16380?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Johnny Hsu updated KAFKA-16380:
---
Description: When working on #KAFKA-16341, we found 

> Rename the shallowOffsetOfMaxTimestamp in LogSegment
> 
>
> Key: KAFKA-16380
> URL: https://issues.apache.org/jira/browse/KAFKA-16380
> Project: Kafka
>  Issue Type: Bug
>Reporter: Johnny Hsu
>Assignee: Johnny Hsu
>Priority: Minor
>
> When working on #KAFKA-16341, we found 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-16380) Rename the shallowOffsetOfMaxTimestamp in LogSegment

2024-03-19 Thread Johnny Hsu (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17828302#comment-17828302
 ] 

Johnny Hsu commented on KAFKA-16380:


[~chia7712] sure, let me update that.

BTW this is already addressed in 
[https://github.com/apache/kafka/pull/15476].

Will link it here. 

> Rename the shallowOffsetOfMaxTimestamp in LogSegment
> 
>
> Key: KAFKA-16380
> URL: https://issues.apache.org/jira/browse/KAFKA-16380
> Project: Kafka
>  Issue Type: Bug
>Reporter: Johnny Hsu
>Assignee: Johnny Hsu
>Priority: Minor
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16383) fix flaky test IdentityReplicationIntegrationTest.testReplicateFromLatest()

2024-03-18 Thread Johnny Hsu (Jira)
Johnny Hsu created KAFKA-16383:
--

 Summary: fix flaky test 
IdentityReplicationIntegrationTest.testReplicateFromLatest()
 Key: KAFKA-16383
 URL: https://issues.apache.org/jira/browse/KAFKA-16383
 Project: Kafka
  Issue Type: Bug
Reporter: Johnny Hsu
Assignee: Johnny Hsu


Build link: 
[https://ci-builds.apache.org/job/Kafka/job/kafka-pr/job/PR-15463/4/testReport/junit/org.apache.kafka.connect.mirror.integration/IdentityReplicationIntegrationTest/Build___JDK_11_and_Scala_2_13___testReplicateFromLatest__/]

 

This test has failed in the builds of several PRs, which indicates it is flaky



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16381) We should use a lock to protect the config getter in KafkaMetric

2024-03-17 Thread Johnny Hsu (Jira)
Johnny Hsu created KAFKA-16381:
--

 Summary: We should use a lock to protect the config getter in 
KafkaMetric
 Key: KAFKA-16381
 URL: https://issues.apache.org/jira/browse/KAFKA-16381
 Project: Kafka
  Issue Type: Bug
Reporter: Johnny Hsu
Assignee: Johnny Hsu


In KafkaMetric.java, the getter is 

```
public MetricConfig config() {
    return this.config;
}
```

and there is a setter 

```
public void config(MetricConfig config) {
    synchronized (lock) {
        this.config = config;
    }
}
```

Since the config can be set and read concurrently, we should take the lock in 
the getter as well.
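
The fix can be sketched with a simplified stand-in class. This is illustrative only: `LockedMetric` and the plain `String` field stand in for the real KafkaMetric and its MetricConfig.

```java
public class LockedMetric {
    private final Object lock = new Object();
    private String config;  // stands in for MetricConfig

    // The setter already holds the lock, as in KafkaMetric today.
    public void config(String config) {
        synchronized (lock) {
            this.config = config;
        }
    }

    // Proposed fix: the getter takes the same lock, so a concurrent
    // config(...) call cannot be observed mid-update.
    public String config() {
        synchronized (lock) {
            return this.config;
        }
    }
}
```

Synchronizing both accessors on the same lock also gives the reader a happens-before edge with the most recent write, not just mutual exclusion.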



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16380) Rename the shallowOffsetOfMaxTimestamp in LogSegment

2024-03-15 Thread Johnny Hsu (Jira)
Johnny Hsu created KAFKA-16380:
--

 Summary: Rename the shallowOffsetOfMaxTimestamp in LogSegment
 Key: KAFKA-16380
 URL: https://issues.apache.org/jira/browse/KAFKA-16380
 Project: Kafka
  Issue Type: Bug
Reporter: Johnny Hsu
Assignee: Johnny Hsu






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (KAFKA-16348) Fix flaky TopicCommandIntegrationTest.testDescribeUnderReplicatedPartitionsWhenReassignmentIsInProgress

2024-03-06 Thread Johnny Hsu (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Johnny Hsu reassigned KAFKA-16348:
--

Assignee: Johnny Hsu

> Fix flaky 
> TopicCommandIntegrationTest.testDescribeUnderReplicatedPartitionsWhenReassignmentIsInProgress
> ---
>
> Key: KAFKA-16348
> URL: https://issues.apache.org/jira/browse/KAFKA-16348
> Project: Kafka
>  Issue Type: Bug
>Reporter: Chia-Ping Tsai
>Assignee: Johnny Hsu
>Priority: Minor
>
> {code:java}
> Gradle Test Run :tools:test > Gradle Test Executor 36 > 
> TopicCommandIntegrationTest > 
> testDescribeUnderReplicatedPartitionsWhenReassignmentIsInProgress(String) > 
> testDescribeUnderReplicatedPartitionsWhenReassignmentIsInProgress(String).kraft
>  FAILED
>     org.opentest4j.AssertionFailedError: --under-replicated-partitions 
> shouldn't return anything: 'Topic: 
> testDescribeUnderReplicatedPartitionsWhenReassignmentIsInProgress-4l8dkZ6JT2  
> Partition: 0    Leader: 3       Replicas: 0,3   Isr: 3' ==> expected: <> but 
> was: <Topic: 
> testDescribeUnderReplicatedPartitionsWhenReassignmentIsInProgress-4l8dkZ6JT2  
> Partition: 0    Leader: 3       Replicas: 0,3   Isr: 3>
> Partition: 0    Leader: 3       Replicas: 0,3   Isr: 3>
>         at 
> app//org.junit.jupiter.api.AssertionFailureBuilder.build(AssertionFailureBuilder.java:151)
>         at 
> app//org.junit.jupiter.api.AssertionFailureBuilder.buildAndThrow(AssertionFailureBuilder.java:132)
>         at 
> app//org.junit.jupiter.api.AssertEquals.failNotEqual(AssertEquals.java:197)
>         at 
> app//org.junit.jupiter.api.AssertEquals.assertEquals(AssertEquals.java:182)
>         at 
> app//org.junit.jupiter.api.Assertions.assertEquals(Assertions.java:1156)
>         at 
> app//org.apache.kafka.tools.TopicCommandIntegrationTest.testDescribeUnderReplicatedPartitionsWhenReassignmentIsInProgress(TopicCommandIntegrationTest.java:827)
>  {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (KAFKA-16341) Fix un-compressed records

2024-03-05 Thread Johnny Hsu (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Johnny Hsu reassigned KAFKA-16341:
--

Assignee: Johnny Hsu

> Fix un-compressed records
> -
>
> Key: KAFKA-16341
> URL: https://issues.apache.org/jira/browse/KAFKA-16341
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Luke Chen
>Assignee: Johnny Hsu
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (KAFKA-16318) Add javadoc to KafkaMetric

2024-03-04 Thread Johnny Hsu (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Johnny Hsu reassigned KAFKA-16318:
--

Assignee: Johnny Hsu

> Add javadoc to KafkaMetric
> --
>
> Key: KAFKA-16318
> URL: https://issues.apache.org/jira/browse/KAFKA-16318
> Project: Kafka
>  Issue Type: Bug
>  Components: docs
>Reporter: Mickael Maison
>Assignee: Johnny Hsu
>Priority: Major
>
> KafkaMetric is part of the public API but it's missing javadoc describing the 
> class and several of its methods.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16323) Failing test: fix testRemoteFetchExpiresPerSecMetric

2024-03-03 Thread Johnny Hsu (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Johnny Hsu updated KAFKA-16323:
---
Priority: Major  (was: Minor)

> Failing test: fix testRemoteFetchExpiresPerSecMetric 
> -
>
> Key: KAFKA-16323
> URL: https://issues.apache.org/jira/browse/KAFKA-16323
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Reporter: Johnny Hsu
>Assignee: Johnny Hsu
>Priority: Major
>  Labels: test-failure
>
> Refer to 
> [https://ci-builds.apache.org/job/Kafka/job/kafka/job/trunk/2685/testReport/junit/kafka.server/ReplicaManagerTest/Build___JDK_21_and_Scala_2_13___testRemoteFetchExpiresPerSecMetric__/]
> This test is failing, and this ticket aims to address this 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16323) Failing test: fix testRemoteFetchExpiresPerSecMetric

2024-03-03 Thread Johnny Hsu (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Johnny Hsu updated KAFKA-16323:
---
Labels: test-failure  (was: )

> Failing test: fix testRemoteFetchExpiresPerSecMetric 
> -
>
> Key: KAFKA-16323
> URL: https://issues.apache.org/jira/browse/KAFKA-16323
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Reporter: Johnny Hsu
>Assignee: Johnny Hsu
>Priority: Minor
>  Labels: test-failure
>
> Refer to 
> [https://ci-builds.apache.org/job/Kafka/job/kafka/job/trunk/2685/testReport/junit/kafka.server/ReplicaManagerTest/Build___JDK_21_and_Scala_2_13___testRemoteFetchExpiresPerSecMetric__/]
> This test is failing, and this ticket aims to address this 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16323) Failing test: fix testRemoteFetchExpiresPerSecMetric

2024-03-03 Thread Johnny Hsu (Jira)
Johnny Hsu created KAFKA-16323:
--

 Summary: Failing test: fix testRemoteFetchExpiresPerSecMetric 
 Key: KAFKA-16323
 URL: https://issues.apache.org/jira/browse/KAFKA-16323
 Project: Kafka
  Issue Type: Bug
  Components: core
Reporter: Johnny Hsu
Assignee: Johnny Hsu


Refer to 
[https://ci-builds.apache.org/job/Kafka/job/kafka/job/trunk/2685/testReport/junit/kafka.server/ReplicaManagerTest/Build___JDK_21_and_Scala_2_13___testRemoteFetchExpiresPerSecMetric__/]

This test is failing, and this ticket aims to address this 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)