[jira] [Created] (KAFKA-8141) Flaky Test FetchRequestDownConversionConfigTest#testV1FetchWithDownConversionDisabled
Matthias J. Sax created KAFKA-8141:
--

Summary: Flaky Test FetchRequestDownConversionConfigTest#testV1FetchWithDownConversionDisabled
Key: KAFKA-8141
URL: https://issues.apache.org/jira/browse/KAFKA-8141
Project: Kafka
Issue Type: Bug
Components: core, unit tests
Affects Versions: 2.2.0
Reporter: Matthias J. Sax
Fix For: 2.3.0, 2.2.1

[https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/80/testReport/junit/kafka.server/FetchRequestDownConversionConfigTest/testV1FetchWithDownConversionDisabled/]

{quote}java.lang.AssertionError: Partition [__consumer_offsets,0] metadata not propagated after 15000 ms
at kafka.utils.TestUtils$.fail(TestUtils.scala:381)
at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:791)
at kafka.utils.TestUtils$.waitUntilMetadataIsPropagated(TestUtils.scala:880)
at kafka.utils.TestUtils$.$anonfun$createTopic$3(TestUtils.scala:318)
at kafka.utils.TestUtils$.$anonfun$createTopic$3$adapted(TestUtils.scala:317)
at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:237)
at scala.collection.immutable.Range.foreach(Range.scala:158)
at scala.collection.TraversableLike.map(TraversableLike.scala:237)
at scala.collection.TraversableLike.map$(TraversableLike.scala:230)
at scala.collection.AbstractTraversable.map(Traversable.scala:108)
at kafka.utils.TestUtils$.createTopic(TestUtils.scala:317)
at kafka.utils.TestUtils$.createOffsetsTopic(TestUtils.scala:375)
at kafka.api.IntegrationTestHarness.doSetup(IntegrationTestHarness.scala:95)
at kafka.api.IntegrationTestHarness.setUp(IntegrationTestHarness.scala:73){quote}

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
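Most of the flaky-test reports in this digest share the same failure shape: `kafka.utils.TestUtils.waitUntilTrue` repeatedly polls a condition (here, topic-metadata propagation) and fails with an `AssertionError` once a 15000 ms deadline expires. A minimal sketch of that polling pattern, written in Python purely for illustration (the function name mirrors the Scala helper, but the defaults and structure here are assumptions, not Kafka's actual implementation):

```python
import time

def wait_until_true(condition, msg, wait_time_ms=15000, poll_ms=100):
    """Poll `condition` until it returns True; raise AssertionError with
    `msg` if the deadline passes first -- the mechanism behind the
    "metadata not propagated after 15000 ms" failures above."""
    deadline = time.monotonic() + wait_time_ms / 1000.0
    while True:
        if condition():
            return
        if time.monotonic() >= deadline:
            raise AssertionError(msg)
        time.sleep(poll_ms / 1000.0)
```

On a slow or overloaded CI worker the condition simply does not become true within the deadline, so the test fails without any product bug, which is why these show up as "flaky" rather than reproducible failures.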
[jira] [Commented] (KAFKA-7989) Flaky Test RequestQuotaTest#testResponseThrottleTimeWhenBothFetchAndRequestQuotasViolated
[ https://issues.apache.org/jira/browse/KAFKA-7989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16797726#comment-16797726 ]

Matthias J. Sax commented on KAFKA-7989:
--

One more: [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/80/testReport/junit/kafka.server/RequestQuotaTest/testResponseThrottleTimeWhenBothFetchAndRequestQuotasViolated/]

> Flaky Test RequestQuotaTest#testResponseThrottleTimeWhenBothFetchAndRequestQuotasViolated
> -
>
> Key: KAFKA-7989
> URL: https://issues.apache.org/jira/browse/KAFKA-7989
> Project: Kafka
> Issue Type: Bug
> Components: core, unit tests
> Affects Versions: 2.2.0
> Reporter: Matthias J. Sax
> Assignee: Anna Povzner
> Priority: Critical
> Labels: flaky-test
> Fix For: 2.3.0, 2.2.1
>
> To get stable nightly builds for the `2.2` release, I create tickets for all observed test failures.
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/27/]
> {quote}java.util.concurrent.ExecutionException: java.lang.AssertionError: Throttle time metrics for consumer quota not updated: small-quota-consumer-client
> at java.util.concurrent.FutureTask.report(FutureTask.java:122)
> at java.util.concurrent.FutureTask.get(FutureTask.java:206)
> at kafka.server.RequestQuotaTest.$anonfun$waitAndCheckResults$1(RequestQuotaTest.scala:415)
> at scala.collection.immutable.List.foreach(List.scala:392)
> at scala.collection.generic.TraversableForwarder.foreach(TraversableForwarder.scala:38)
> at scala.collection.generic.TraversableForwarder.foreach$(TraversableForwarder.scala:38)
> at scala.collection.mutable.ListBuffer.foreach(ListBuffer.scala:47)
> at kafka.server.RequestQuotaTest.waitAndCheckResults(RequestQuotaTest.scala:413)
> at kafka.server.RequestQuotaTest.testResponseThrottleTimeWhenBothFetchAndRequestQuotasViolated(RequestQuotaTest.scala:134){quote}
[jira] [Created] (KAFKA-8140) Flaky Test SaslSslAdminClientIntegrationTest#testDescribeAndAlterConfigs
Matthias J. Sax created KAFKA-8140:
--

Summary: Flaky Test SaslSslAdminClientIntegrationTest#testDescribeAndAlterConfigs
Key: KAFKA-8140
URL: https://issues.apache.org/jira/browse/KAFKA-8140
Project: Kafka
Issue Type: Bug
Components: admin, unit tests
Affects Versions: 2.2.0
Reporter: Matthias J. Sax
Fix For: 2.3.0, 2.2.1

[https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/80/testReport/junit/kafka.api/SaslSslAdminClientIntegrationTest/testDescribeAndAlterConfigs/]

{quote}java.lang.IllegalArgumentException: Could not find a 'KafkaServer' or 'sasl_ssl.KafkaServer' entry in the JAAS configuration. System property 'java.security.auth.login.config' is not set
at org.apache.kafka.common.security.JaasContext.defaultContext(JaasContext.java:133)
at org.apache.kafka.common.security.JaasContext.load(JaasContext.java:98)
at org.apache.kafka.common.security.JaasContext.loadServerContext(JaasContext.java:70)
at org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:121)
at org.apache.kafka.common.network.ChannelBuilders.serverChannelBuilder(ChannelBuilders.java:85)
at kafka.network.Processor.<init>(SocketServer.scala:694)
at kafka.network.SocketServer.newProcessor(SocketServer.scala:344)
at kafka.network.SocketServer.$anonfun$addDataPlaneProcessors$1(SocketServer.scala:253)
at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:158)
at kafka.network.SocketServer.addDataPlaneProcessors(SocketServer.scala:252)
at kafka.network.SocketServer.$anonfun$createDataPlaneAcceptorsAndProcessors$1(SocketServer.scala:216)
at kafka.network.SocketServer.$anonfun$createDataPlaneAcceptorsAndProcessors$1$adapted(SocketServer.scala:214)
at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
at kafka.network.SocketServer.createDataPlaneAcceptorsAndProcessors(SocketServer.scala:214)
at kafka.network.SocketServer.startup(SocketServer.scala:114)
at kafka.server.KafkaServer.startup(KafkaServer.scala:253)
at kafka.utils.TestUtils$.createServer(TestUtils.scala:140)
at kafka.integration.KafkaServerTestHarness.$anonfun$setUp$1(KafkaServerTestHarness.scala:101)
at scala.collection.Iterator.foreach(Iterator.scala:941)
at scala.collection.Iterator.foreach$(Iterator.scala:941)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1429)
at scala.collection.IterableLike.foreach(IterableLike.scala:74)
at scala.collection.IterableLike.foreach$(IterableLike.scala:73)
at scala.collection.AbstractIterable.foreach(Iterable.scala:56)
at kafka.integration.KafkaServerTestHarness.setUp(KafkaServerTestHarness.scala:100)
at kafka.api.IntegrationTestHarness.doSetup(IntegrationTestHarness.scala:81)
at kafka.api.IntegrationTestHarness.setUp(IntegrationTestHarness.scala:73)
at kafka.api.AdminClientIntegrationTest.setUp(AdminClientIntegrationTest.scala:79)
at kafka.api.SaslSslAdminClientIntegrationTest.setUp(SaslSslAdminClientIntegrationTest.scala:64){quote}
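The failure above is a setup-ordering problem: the broker's SocketServer builds its SASL channel before any JAAS configuration is visible, because the `java.security.auth.login.config` system property was not set (or was cleared by an earlier test). A hedged sketch of what the harness must provide; the entry name `sasl_ssl.KafkaServer` matches the error message, but the login module and credentials below are illustrative assumptions, not what the real test generates:

```python
import os
import tempfile

# Illustrative JAAS entry only; Kafka's SASL_SSL tests generate theirs with
# test utilities and may use a different mechanism than PLAIN.
JAAS_CONF = """sasl_ssl.KafkaServer {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="admin"
  password="admin-secret"
  user_admin="admin-secret";
};
"""

def write_jaas_config():
    """Write the JAAS entry to a temp file; the JVM would then be pointed
    at it via -Djava.security.auth.login.config=<path> before the broker
    starts, so JaasContext.defaultContext() can find the entry."""
    fd, path = tempfile.mkstemp(suffix=".conf")
    with os.fdopen(fd, "w") as f:
        f.write(JAAS_CONF)
    return path
```

The flakiness then comes down to timing: if the property is set after `SocketServer.startup` runs (or unset by a concurrently tearing-down test), the lookup in `JaasContext.defaultContext` fails exactly as in the trace.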
[jira] [Created] (KAFKA-8139) Flaky Test SaslSslAdminClientIntegrationTest#testMetadataRefresh
Matthias J. Sax created KAFKA-8139:
--

Summary: Flaky Test SaslSslAdminClientIntegrationTest#testMetadataRefresh
Key: KAFKA-8139
URL: https://issues.apache.org/jira/browse/KAFKA-8139
Project: Kafka
Issue Type: Bug
Components: admin, unit tests
Affects Versions: 2.2.0
Reporter: Matthias J. Sax
Fix For: 2.3.0, 2.2.1

[https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/80/testReport/junit/kafka.api/SaslSslAdminClientIntegrationTest/testMetadataRefresh/]

{quote}org.junit.runners.model.TestTimedOutException: test timed out after 12 milliseconds
at java.lang.Object.wait(Native Method)
at java.util.concurrent.ForkJoinTask.externalAwaitDone(ForkJoinTask.java:334)
at java.util.concurrent.ForkJoinTask.doJoin(ForkJoinTask.java:391)
at java.util.concurrent.ForkJoinTask.join(ForkJoinTask.java:719)
at scala.collection.parallel.ForkJoinTasks$WrappedTask.sync(Tasks.scala:379)
at scala.collection.parallel.ForkJoinTasks$WrappedTask.sync$(Tasks.scala:379)
at scala.collection.parallel.AdaptiveWorkStealingForkJoinTasks$WrappedTask.sync(Tasks.scala:440)
at scala.collection.parallel.ForkJoinTasks.executeAndWaitResult(Tasks.scala:423)
at scala.collection.parallel.ForkJoinTasks.executeAndWaitResult$(Tasks.scala:416)
at scala.collection.parallel.ForkJoinTaskSupport.executeAndWaitResult(TaskSupport.scala:60)
at scala.collection.parallel.ExecutionContextTasks.executeAndWaitResult(Tasks.scala:555)
at scala.collection.parallel.ExecutionContextTasks.executeAndWaitResult$(Tasks.scala:555)
at scala.collection.parallel.ExecutionContextTaskSupport.executeAndWaitResult(TaskSupport.scala:84)
at scala.collection.parallel.ParIterableLike.foreach(ParIterableLike.scala:465)
at scala.collection.parallel.ParIterableLike.foreach$(ParIterableLike.scala:464)
at scala.collection.parallel.mutable.ParArray.foreach(ParArray.scala:58)
at kafka.utils.TestUtils$.shutdownServers(TestUtils.scala:201)
at kafka.integration.KafkaServerTestHarness.tearDown(KafkaServerTestHarness.scala:113)
at kafka.api.IntegrationTestHarness.tearDown(IntegrationTestHarness.scala:134)
at kafka.api.AdminClientIntegrationTest.tearDown(AdminClientIntegrationTest.scala:87)
at kafka.api.SaslSslAdminClientIntegrationTest.tearDown(SaslSslAdminClientIntegrationTest.scala:90){quote}

STDOUT

{quote}[2019-03-20 16:30:35,739] ERROR [KafkaServer id=0] Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer:159)
java.lang.IllegalArgumentException: Could not find a 'KafkaServer' or 'sasl_ssl.KafkaServer' entry in the JAAS configuration. System property 'java.security.auth.login.config' is not set
at org.apache.kafka.common.security.JaasContext.defaultContext(JaasContext.java:133)
at org.apache.kafka.common.security.JaasContext.load(JaasContext.java:98)
at org.apache.kafka.common.security.JaasContext.loadServerContext(JaasContext.java:70)
at org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:121)
at org.apache.kafka.common.network.ChannelBuilders.serverChannelBuilder(ChannelBuilders.java:85)
at kafka.network.Processor.<init>(SocketServer.scala:694)
at kafka.network.SocketServer.newProcessor(SocketServer.scala:344)
at kafka.network.SocketServer.$anonfun$addDataPlaneProcessors$1(SocketServer.scala:253)
at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:158)
at kafka.network.SocketServer.addDataPlaneProcessors(SocketServer.scala:252)
at kafka.network.SocketServer.$anonfun$createDataPlaneAcceptorsAndProcessors$1(SocketServer.scala:216)
at kafka.network.SocketServer.$anonfun$createDataPlaneAcceptorsAndProcessors$1$adapted(SocketServer.scala:214)
at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
at kafka.network.SocketServer.createDataPlaneAcceptorsAndProcessors(SocketServer.scala:214)
at kafka.network.SocketServer.startup(SocketServer.scala:114)
at kafka.server.KafkaServer.startup(KafkaServer.scala:253)
at kafka.utils.TestUtils$.createServer(TestUtils.scala:140)
at kafka.integration.KafkaServerTestHarness.$anonfun$setUp$1(KafkaServerTestHarness.scala:101)
at scala.collection.Iterator.foreach(Iterator.scala:941)
at scala.collection.Iterator.foreach$(Iterator.scala:941)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1429)
at scala.collection.IterableLike.foreach(IterableLike.scala:74)
at scala.collection.IterableLike.foreach$(IterableLike.scala:73)
at scala.collection.AbstractIterable.foreach(Iterable.scala:56)
at kafka.integration.KafkaServerTestHarness.setUp(KafkaServerTestHarness.scala:100)
at kafka.api.IntegrationTestHarness.doSetup(IntegrationTestHarness.scala:81) at
[jira] [Created] (KAFKA-8138) Flaky Test PlaintextConsumerTest#testFetchRecordLargerThanFetchMaxBytes
Matthias J. Sax created KAFKA-8138:
--

Summary: Flaky Test PlaintextConsumerTest#testFetchRecordLargerThanFetchMaxBytes
Key: KAFKA-8138
URL: https://issues.apache.org/jira/browse/KAFKA-8138
Project: Kafka
Issue Type: Bug
Components: clients, unit tests
Affects Versions: 2.2.0
Reporter: Matthias J. Sax
Fix For: 2.3.0, 2.2.1

[https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/80/testReport/junit/kafka.api/PlaintextConsumerTest/testFetchRecordLargerThanFetchMaxBytes/]

{quote}java.lang.AssertionError: Partition [topic,0] metadata not propagated after 15000 ms
at kafka.utils.TestUtils$.fail(TestUtils.scala:381)
at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:791)
at kafka.utils.TestUtils$.waitUntilMetadataIsPropagated(TestUtils.scala:880)
at kafka.utils.TestUtils$.$anonfun$createTopic$3(TestUtils.scala:318)
at kafka.utils.TestUtils$.$anonfun$createTopic$3$adapted(TestUtils.scala:317)
at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:237)
at scala.collection.immutable.Range.foreach(Range.scala:158)
at scala.collection.TraversableLike.map(TraversableLike.scala:237)
at scala.collection.TraversableLike.map$(TraversableLike.scala:230)
at scala.collection.AbstractTraversable.map(Traversable.scala:108)
at kafka.utils.TestUtils$.createTopic(TestUtils.scala:317)
at kafka.integration.KafkaServerTestHarness.createTopic(KafkaServerTestHarness.scala:125)
at kafka.api.BaseConsumerTest.setUp(BaseConsumerTest.scala:69){quote}

STDOUT (truncated)

{quote}[2019-03-20 16:10:19,759] ERROR [ReplicaFetcher replicaId=2, leaderId=0, fetcherId=0] Error for partition __consumer_offsets-0 at offset 0 (kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
[2019-03-20 16:10:19,760] ERROR [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition __consumer_offsets-0 at offset 0 (kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
[2019-03-20 16:10:19,963] ERROR [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition topic-1 at offset 0 (kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
[2019-03-20 16:10:19,964] ERROR [ReplicaFetcher replicaId=1, leaderId=2, fetcherId=0] Error for partition topic-0 at offset 0 (kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
[2019-03-20 16:10:19,975] ERROR [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Error for partition topic-0 at offset 0 (kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.{quote}
[jira] [Created] (KAFKA-8137) Flaky Test LegacyAdminClientTest#testOffsetsForTimesWhenOffsetNotFound
Matthias J. Sax created KAFKA-8137:
--

Summary: Flaky Test LegacyAdminClientTest#testOffsetsForTimesWhenOffsetNotFound
Key: KAFKA-8137
URL: https://issues.apache.org/jira/browse/KAFKA-8137
Project: Kafka
Issue Type: Bug
Components: admin, unit tests
Affects Versions: 2.2.0
Reporter: Matthias J. Sax
Fix For: 2.3.0, 2.2.1

[https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/80/testReport/junit/kafka.api/LegacyAdminClientTest/testOffsetsForTimesWhenOffsetNotFound/]

{quote}java.lang.AssertionError: Partition [topic,0] metadata not propagated after 15000 ms
at kafka.utils.TestUtils$.fail(TestUtils.scala:381)
at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:791)
at kafka.utils.TestUtils$.waitUntilMetadataIsPropagated(TestUtils.scala:880)
at kafka.utils.TestUtils$.$anonfun$createTopic$3(TestUtils.scala:318)
at kafka.utils.TestUtils$.$anonfun$createTopic$3$adapted(TestUtils.scala:317)
at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:237)
at scala.collection.immutable.Range.foreach(Range.scala:158)
at scala.collection.TraversableLike.map(TraversableLike.scala:237)
at scala.collection.TraversableLike.map$(TraversableLike.scala:230)
at scala.collection.AbstractTraversable.map(Traversable.scala:108)
at kafka.utils.TestUtils$.createTopic(TestUtils.scala:317)
at kafka.integration.KafkaServerTestHarness.createTopic(KafkaServerTestHarness.scala:125)
at kafka.api.LegacyAdminClientTest.setUp(LegacyAdminClientTest.scala:73){quote}

STDOUT

{quote}[2019-03-20 16:28:10,089] ERROR [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition __consumer_offsets-0 at offset 0 (kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
[2019-03-20 16:28:10,093] ERROR [ReplicaFetcher replicaId=2, leaderId=0, fetcherId=0] Error for partition __consumer_offsets-0 at offset 0 (kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
[2019-03-20 16:28:10,303] ERROR [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition topic-0 at offset 0 (kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
[2019-03-20 16:28:10,303] ERROR [ReplicaFetcher replicaId=2, leaderId=0, fetcherId=0] Error for partition topic-0 at offset 0 (kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
[2019-03-20 16:28:14,493] ERROR [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Error for partition __consumer_offsets-0 at offset 0 (kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
[2019-03-20 16:28:14,724] ERROR [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Error for partition topic-1 at offset 0 (kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
[2019-03-20 16:28:21,388] ERROR [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Error for partition __consumer_offsets-0 at offset 0 (kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
[2019-03-20 16:28:21,394] ERROR [ReplicaFetcher replicaId=1, leaderId=2, fetcherId=0] Error for partition __consumer_offsets-0 at offset 0 (kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
[2019-03-20 16:29:48,224] ERROR [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Error for partition __consumer_offsets-0 at offset 0 (kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
[2019-03-20 16:29:48,249] ERROR [ReplicaFetcher replicaId=0, leaderId=1, fetcherId=0] Error for partition __consumer_offsets-0 at offset 0 (kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
[2019-03-20 16:29:49,255] ERROR [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Error for partition __consumer_offsets-0 at offset 0 (kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
[2019-03-20 16:29:49,256] ERROR [ReplicaFetcher replicaId=0, leaderId=1, fetcherId=0] Error for partition __consumer_offsets-0 at offset 0
[jira] [Created] (KAFKA-8136) Flaky Test MetadataRequestTest#testAllTopicsRequest
Matthias J. Sax created KAFKA-8136:
--

Summary: Flaky Test MetadataRequestTest#testAllTopicsRequest
Key: KAFKA-8136
URL: https://issues.apache.org/jira/browse/KAFKA-8136
Project: Kafka
Issue Type: Bug
Components: core, unit tests
Affects Versions: 2.2.0
Reporter: Matthias J. Sax
Fix For: 2.3.0, 2.2.1

[https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/78/testReport/junit/kafka.server/MetadataRequestTest/testAllTopicsRequest/]

{quote}java.lang.AssertionError: Partition [t2,0] metadata not propagated after 15000 ms
at kafka.utils.TestUtils$.fail(TestUtils.scala:381)
at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:791)
at kafka.utils.TestUtils$.waitUntilMetadataIsPropagated(TestUtils.scala:880)
at kafka.utils.TestUtils$.$anonfun$createTopic$3(TestUtils.scala:318)
at kafka.utils.TestUtils$.$anonfun$createTopic$3$adapted(TestUtils.scala:317)
at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:237)
at scala.collection.immutable.Range.foreach(Range.scala:158)
at scala.collection.TraversableLike.map(TraversableLike.scala:237)
at scala.collection.TraversableLike.map$(TraversableLike.scala:230)
at scala.collection.AbstractTraversable.map(Traversable.scala:108)
at kafka.utils.TestUtils$.createTopic(TestUtils.scala:317)
at kafka.integration.KafkaServerTestHarness.createTopic(KafkaServerTestHarness.scala:125)
at kafka.server.MetadataRequestTest.testAllTopicsRequest(MetadataRequestTest.scala:201){quote}

STDOUT

{quote}[2019-03-20 00:05:17,921] ERROR [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition replicaDown-0 at offset 0 (kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
[2019-03-20 00:05:23,520] WARN Unable to read additional data from client sessionid 0x10033b4d8c6, likely client has closed socket (org.apache.zookeeper.server.NIOServerCnxn:376)
[2019-03-20 00:05:23,794] ERROR [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Error for partition testAutoCreate_Topic-0 at offset 0 (kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
[2019-03-20 00:05:30,735] ERROR [ReplicaFetcher replicaId=2, leaderId=0, fetcherId=0] Error for partition __consumer_offsets-2 at offset 0 (kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
[2019-03-20 00:05:31,156] ERROR [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition notInternal-0 at offset 0 (kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
[2019-03-20 00:05:31,156] ERROR [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Error for partition notInternal-1 at offset 0 (kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
[2019-03-20 00:05:37,817] WARN Unable to read additional data from client sessionid 0x10033b51c370002, likely client has closed socket (org.apache.zookeeper.server.NIOServerCnxn:376)
[2019-03-20 00:05:51,571] ERROR [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition t1-2 at offset 0 (kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
[2019-03-20 00:05:51,571] ERROR [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Error for partition t1-1 at offset 0 (kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
[2019-03-20 00:06:22,153] ERROR [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Error for partition t1-0 at offset 0 (kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
[2019-03-20 00:06:22,622] ERROR [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Error for partition t2-0 at offset 0 (kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
[2019-03-20 00:06:35,106] ERROR [ReplicaFetcher replicaId=0, leaderId=1, fetcherId=0] Error for partition t1-0 at offset 0 (kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
[2019-03-20 00:06:35,129] ERROR [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Error for partition t1-1 at offset 0
[jira] [Commented] (KAFKA-8131) Add --version parameter to command line help outputs & docs
[ https://issues.apache.org/jira/browse/KAFKA-8131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16797626#comment-16797626 ]

ASF GitHub Bot commented on KAFKA-8131:
--

soenkeliebau commented on pull request #6481: KAFKA-8131: Moved --version argument implementation from kafka-run-class.sh to CommandLineUtils to include documentation in command line help
URL: https://github.com/apache/kafka/pull/6481

After some further investigation, I've taken the liberty of refactoring the implementation of the --version option and moving it into the default command options. This has the benefit of automatically including it in the usage output of the command line tools without having to actually touch any of them.

### Committer Checklist (excluded from commit message)
- [ ] Verify design and implementation
- [ ] Verify test coverage and CI build status
- [ ] Verify documentation (including upgrade notes)

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at: us...@infra.apache.org

> Add --version parameter to command line help outputs & docs
> ---
>
> Key: KAFKA-8131
> URL: https://issues.apache.org/jira/browse/KAFKA-8131
> Project: Kafka
> Issue Type: Improvement
> Components: tools
> Reporter: Sönke Liebau
> Assignee: Sönke Liebau
> Priority: Minor
>
> KAFKA-2061 added the --version flag to kafka-run-class.sh, which prints the Kafka version to the command line.
> As this is in kafka-run-class.sh, it effectively works for all command-line tools that use this file to run a class, so it should be added to the help output of these scripts as well.
> A quick grep leads me to these suspects:
> * connect-distributed.sh
> * connect-standalone.sh
> * kafka-acls.sh
> * kafka-broker-api-versions.sh
> * kafka-configs.sh
> * kafka-console-consumer.sh
> * kafka-console-producer.sh
> * kafka-consumer-groups.sh
> * kafka-consumer-perf-test.sh
> * kafka-delegation-tokens.sh
> * kafka-delete-records.sh
> * kafka-dump-log.sh
> * kafka-log-dirs.sh
> * kafka-mirror-maker.sh
> * kafka-preferred-replica-election.sh
> * kafka-producer-perf-test.sh
> * kafka-reassign-partitions.sh
> * kafka-replica-verification.sh
> * kafka-server-start.sh
> * kafka-streams-application-reset.sh
> * kafka-topics.sh
> * kafka-verifiable-consumer.sh
> * kafka-verifiable-producer.sh
> * trogdor.sh
> * zookeeper-security-migration.sh
> * zookeeper-server-start.sh
> * zookeeper-shell.sh
>
> Currently this parameter is not documented at all, neither in the output nor in the official docs.
> I'd propose to add it to the docs as well as part of this issue; I'll look for a suitable place.
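The mechanism the comment describes, implementing `--version` once in a shared options helper so that every tool lists it in its help output without being touched individually, can be sketched with Python's `argparse`. (The real change lives in Kafka's Scala `CommandLineUtils`; the function name and version string below are placeholders, not Kafka code.)

```python
import argparse

KAFKA_VERSION = "2.3.0-SNAPSHOT"  # placeholder; real tools resolve this from the build

def base_parser(tool_name):
    """Shared option helper: any tool built on this parser gets --version
    in its --help output for free, which is the point of moving the flag
    out of kafka-run-class.sh and into common command options."""
    parser = argparse.ArgumentParser(prog=tool_name)
    parser.add_argument("--version", action="version", version=KAFKA_VERSION,
                        help="Display Kafka version.")
    return parser
```

With this shape, `base_parser("kafka-topics.sh")` already documents `--version` in its usage text, so none of the individual scripts in the list above need editing.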
[jira] [Commented] (KAFKA-8131) Add --version parameter to command line help outputs & docs
[ https://issues.apache.org/jira/browse/KAFKA-8131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16797625#comment-16797625 ]

ASF GitHub Bot commented on KAFKA-8131:
--

soenkeliebau commented on pull request #6480: KAFKA-8131: Moved --version argument implementation from kafka-run-class.sh to CommandLineUtils to include documentation in command line help
URL: https://github.com/apache/kafka/pull/6480

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at: us...@infra.apache.org

> Add --version parameter to command line help outputs & docs
> ---
>
> Key: KAFKA-8131
> URL: https://issues.apache.org/jira/browse/KAFKA-8131
> Project: Kafka
> Issue Type: Improvement
> Components: tools
> Reporter: Sönke Liebau
> Assignee: Sönke Liebau
> Priority: Minor
>
> KAFKA-2061 added the --version flag to kafka-run-class.sh, which prints the Kafka version to the command line.
> As this is in kafka-run-class.sh, it effectively works for all command-line tools that use this file to run a class, so it should be added to the help output of these scripts as well.
> A quick grep leads me to these suspects:
> * connect-distributed.sh
> * connect-standalone.sh
> * kafka-acls.sh
> * kafka-broker-api-versions.sh
> * kafka-configs.sh
> * kafka-console-consumer.sh
> * kafka-console-producer.sh
> * kafka-consumer-groups.sh
> * kafka-consumer-perf-test.sh
> * kafka-delegation-tokens.sh
> * kafka-delete-records.sh
> * kafka-dump-log.sh
> * kafka-log-dirs.sh
> * kafka-mirror-maker.sh
> * kafka-preferred-replica-election.sh
> * kafka-producer-perf-test.sh
> * kafka-reassign-partitions.sh
> * kafka-replica-verification.sh
> * kafka-server-start.sh
> * kafka-streams-application-reset.sh
> * kafka-topics.sh
> * kafka-verifiable-consumer.sh
> * kafka-verifiable-producer.sh
> * trogdor.sh
> * zookeeper-security-migration.sh
> * zookeeper-server-start.sh
> * zookeeper-shell.sh
>
> Currently this parameter is not documented at all, neither in the output nor in the official docs.
> I'd propose to add it to the docs as well as part of this issue; I'll look for a suitable place.
[jira] [Commented] (KAFKA-7925) Constant 100% cpu usage by all kafka brokers
[ https://issues.apache.org/jira/browse/KAFKA-7925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16797611#comment-16797611 ]

Rajini Sivaram commented on KAFKA-7925:
--

[~xabhi] Sorry, we couldn't meet the dates for the v2.2.0 release.

> Constant 100% cpu usage by all kafka brokers
> -
>
> Key: KAFKA-7925
> URL: https://issues.apache.org/jira/browse/KAFKA-7925
> Project: Kafka
> Issue Type: Bug
> Components: core
> Affects Versions: 2.1.0, 2.1.1
> Environment: Java 11, Kafka v2.1.0, Kafka v2.1.1
> Reporter: Abhi
> Priority: Critical
> Attachments: jira-server.log-1, jira-server.log-2, jira-server.log-3, jira-server.log-4, jira-server.log-5, jira-server.log-6, jira_prod.producer.log, threadump20190212.txt
>
> Hi,
> I am seeing constant 100% cpu usage on all brokers in our kafka cluster even without any clients connected to any broker.
> This is a bug that we have seen multiple times in our kafka setup that is not yet open to clients. It is becoming a blocker for our deployment now.
> I am seeing a lot of connections to other brokers in CLOSE_WAIT state (see below). In thread usage, I am seeing these threads 'kafka-network-thread-6-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0,kafka-network-thread-6-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1,kafka-network-thread-6-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-2' taking up more than 90% of the cpu time in a 60s interval.
> I have attached a thread dump of one of the brokers in the cluster.
>
> *Java version:*
> openjdk 11.0.2 2019-01-15
> OpenJDK Runtime Environment 18.9 (build 11.0.2+9)
> OpenJDK 64-Bit Server VM 18.9 (build 11.0.2+9, mixed mode)
> *Kafka version:* v2.1.0
>
> *connections:*
> java 144319 kafkagod 88u IPv4 3063266 0t0 TCP *:35395 (LISTEN)
> java 144319 kafkagod 89u IPv4 3063267 0t0 TCP *:9144 (LISTEN)
> java 144319 kafkagod 104u IPv4 3064219 0t0 TCP mwkafka-prod-02.tbd:47292->mwkafka-zk-prod-05.tbd:2181 (ESTABLISHED)
> java 144319 kafkagod 2003u IPv4 3055115 0t0 TCP *:9092 (LISTEN)
> java 144319 kafkagod 2013u IPv4 7220110 0t0 TCP mwkafka-prod-02.tbd:60724->mwkafka-zk-prod-04.dr:2181 (ESTABLISHED)
> java 144319 kafkagod 2020u IPv4 30012904 0t0 TCP mwkafka-prod-02.tbd:38988->mwkafka-prod-02.nyc:9092 (ESTABLISHED)
> java 144319 kafkagod 2021u IPv4 30012961 0t0 TCP mwkafka-prod-02.tbd:58420->mwkafka-prod-01.nyc:9092 (ESTABLISHED)
> java 144319 kafkagod 2027u IPv4 30015723 0t0 TCP mwkafka-prod-02.tbd:58398->mwkafka-prod-01.nyc:9092 (ESTABLISHED)
> java 144319 kafkagod 2028u IPv4 30015630 0t0 TCP mwkafka-prod-02.tbd:36248->mwkafka-prod-02.dr:9092 (ESTABLISHED)
> java 144319 kafkagod 2030u IPv4 30015726 0t0 TCP mwkafka-prod-02.tbd:39012->mwkafka-prod-02.nyc:9092 (ESTABLISHED)
> java 144319 kafkagod 2031u IPv4 30013619 0t0 TCP mwkafka-prod-02.tbd:38986->mwkafka-prod-02.nyc:9092 (ESTABLISHED)
> java 144319 kafkagod 2032u IPv4 30015604 0t0 TCP mwkafka-prod-02.tbd:36246->mwkafka-prod-02.dr:9092 (ESTABLISHED)
> java 144319 kafkagod 2033u IPv4 30012981 0t0 TCP mwkafka-prod-02.tbd:36924->mwkafka-prod-01.dr:9092 (ESTABLISHED)
> java 144319 kafkagod 2034u IPv4 30012967 0t0 TCP mwkafka-prod-02.tbd:39036->mwkafka-prod-02.nyc:9092 (ESTABLISHED)
> java 144319 kafkagod 2035u IPv4 30012898 0t0 TCP mwkafka-prod-02.tbd:36866->mwkafka-prod-01.dr:9092 (FIN_WAIT2)
> java 144319 kafkagod 2036u IPv4 30004729 0t0 TCP mwkafka-prod-02.tbd:36882->mwkafka-prod-01.dr:9092 (ESTABLISHED)
> java 144319 kafkagod 2037u IPv4 30004914 0t0 TCP mwkafka-prod-02.tbd:58426->mwkafka-prod-01.nyc:9092 (ESTABLISHED)
> java 144319 kafkagod 2038u IPv4 30015651 0t0 TCP mwkafka-prod-02.tbd:36884->mwkafka-prod-01.dr:9092 (ESTABLISHED)
> java 144319 kafkagod 2039u IPv4 30012966 0t0 TCP mwkafka-prod-02.tbd:58422->mwkafka-prod-01.nyc:9092 (ESTABLISHED)
> java 144319 kafkagod 2040u IPv4 30005643 0t0 TCP mwkafka-prod-02.tbd:36252->mwkafka-prod-02.dr:9092 (ESTABLISHED)
> java 144319 kafkagod 2041u IPv4 30012944 0t0 TCP mwkafka-prod-02.tbd:36286->mwkafka-prod-02.dr:9092 (ESTABLISHED)
> java 144319 kafkagod 2042u IPv4 30012973 0t0 TCP mwkafka-prod-02.tbd:9092->mwkafka-prod-01.nyc:51924 (ESTABLISHED)
> java 144319 kafkagod 2043u sock 0,7 0t0 30012463 protocol: TCP
> java 144319 kafkagod 2044u IPv4 30012979 0t0 TCP mwkafka-prod-02.tbd:9092->mwkafka-prod-01.dr:39994 (ESTABLISHED)
> java 144319 kafkagod 2045u IPv4 30012899 0t0 TCP mwkafka-prod-02.tbd:9092->mwkafka-prod-02.nyc:34548 (ESTABLISHED)
> java 144319 kafkagod 2046u sock 0,7 0t0 30003437 protocol: TCP
> java 144319 kafkagod 2047u IPv4 30012980 0t0 TCP mwkafka-prod-02.tbd:9092->mwkafka-prod-02.dr:38120 (ESTABLISHED)
> java 144319 kafkagod 2048u sock 0,7 0t0 30012546 protocol: TCP
> java 144319 kafkagod 2049u IPv4 30005418 0t0 TCP >
[jira] [Commented] (KAFKA-7996) KafkaStreams does not pass timeout when closing Producer
[ https://issues.apache.org/jira/browse/KAFKA-7996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16797555#comment-16797555 ] Bill Bejeck commented on KAFKA-7996: IMHO we should go ahead and add a config for a "client close timeout" and then pass it to the consumer, producer, and admin client. While this approach requires a KIP, it's just a config, so I don't see it taking too much time or being controversial. Thanks, Bill > KafkaStreams does not pass timeout when closing Producer > > > Key: KAFKA-7996 > URL: https://issues.apache.org/jira/browse/KAFKA-7996 > Project: Kafka > Issue Type: Bug > Components: streams >Affects Versions: 2.1.0 >Reporter: Patrik Kleindl >Assignee: Lee Dongjin >Priority: Major > Labels: needs-kip > > [https://confluentcommunity.slack.com/messages/C48AHTCUQ/convo/C48AHTCUQ-1550831721.026100/] > We are running 2.1 and have a case where the shutdown of a streams > application takes several minutes. > I noticed that although we call streams.close with a timeout of 30 seconds, > the log says > [Producer > clientId=…-8be49feb-8a2e-4088-bdd7-3c197f6107bb-StreamThread-1-producer] > Closing the Kafka producer with timeoutMillis = 9223372036854775807 ms. > Matthias J Sax [3 days ago] > I just checked the code, and yes, we don't provide a timeout for the producer > on close()... -- This message was sent by Atlassian JIRA (v7.6.3#76005)
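The suggestion above can be sketched independently of the Streams internals. This is a minimal illustration, not the actual fix: `TimedClient` and `closeAll` are hypothetical stand-ins showing how one "client close timeout" could be tracked as a single deadline and handed out to each embedded client (the real producer/consumer/admin clients expose timed close overloads), instead of a bare `close()` that defaults the producer to an effectively infinite wait.

```java
import java.time.Duration;
import java.time.Instant;
import java.util.List;

// Sketch: one overall close budget, shared across clients via a deadline.
// TimedClient is a hypothetical stand-in for producer/consumer/admin client.
public class BoundedClose {

    public interface TimedClient {
        void close(Duration timeout);
    }

    public static void closeAll(List<? extends TimedClient> clients, Duration total) {
        Instant deadline = Instant.now().plus(total);
        for (TimedClient client : clients) {
            Duration remaining = Duration.between(Instant.now(), deadline);
            if (remaining.isNegative()) {
                remaining = Duration.ZERO; // budget spent: close as fast as possible
            }
            client.close(remaining);
        }
    }
}
```

Each client gets an explicit, non-negative bound, so a slow producer flush cannot turn a 30-second `streams.close()` into minutes.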
[jira] [Created] (KAFKA-8134) ProducerConfig.LINGER_MS_CONFIG undocumented breaking change in kafka-clients 2.1
Sam Lendle created KAFKA-8134: - Summary: ProducerConfig.LINGER_MS_CONFIG undocumented breaking change in kafka-clients 2.1 Key: KAFKA-8134 URL: https://issues.apache.org/jira/browse/KAFKA-8134 Project: Kafka Issue Type: Bug Components: clients Affects Versions: 2.1.0 Reporter: Sam Lendle Prior to 2.1, the type of the "linger.ms" config was Long, but it was changed to Integer in 2.1.0 ([https://github.com/apache/kafka/pull/5270]). A config that uses a Long value for that parameter, which works with kafka-clients < 2.1, will cause a ConfigException to be thrown when constructing a KafkaProducer once kafka-clients is upgraded to >= 2.1. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
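The incompatibility can be reproduced without a broker. The sketch below is a hypothetical mini-parser (`parseIntConfig` is not a Kafka API) mimicking the stricter behavior once "linger.ms" is declared as a 32-bit int config: a `Long` value that older clients accepted is rejected, while `Integer` and `String` values pass.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical mini-parser illustrating the 2.1 behavior change: an int
// config accepts only Integer values or parseable Strings; a Long fails
// validation, in the spirit of the ConfigException described in the report.
public class LingerMsCheck {

    static int parseIntConfig(String name, Object value) {
        if (value instanceof Integer) {
            return (Integer) value;
        }
        if (value instanceof String) {
            return Integer.parseInt(((String) value).trim());
        }
        throw new IllegalArgumentException("Invalid value " + value + " for configuration "
                + name + ": expected a 32-bit integer, got " + value.getClass().getSimpleName());
    }

    public static void main(String[] args) {
        Map<String, Object> props = new HashMap<>();
        props.put("linger.ms", 5L); // fine on kafka-clients < 2.1, rejected on >= 2.1
        try {
            parseIntConfig("linger.ms", props.get("linger.ms"));
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
        props.put("linger.ms", "5"); // String values parse on both sides of the change
        System.out.println(parseIntConfig("linger.ms", props.get("linger.ms")));
    }
}
```

Passing the value as a String (or an Integer) keeps one property map compatible with clients on both sides of the upgrade.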
[jira] [Commented] (KAFKA-7777) Decouple topic serdes from materialized serdes
[ https://issues.apache.org/jira/browse/KAFKA-7777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16797353#comment-16797353 ] Matthias J. Sax commented on KAFKA-7777: Sounds like a fair request – the DSL changelogging mechanism is not part of the public API (and custom stores are not well documented in general :() and it would be helpful to design a proper and easy-to-use public API for the Processor API. If you are interested in working on this, feel free to create a new ticket – note, this change will require a KIP. > Decouple topic serdes from materialized serdes > -- > > Key: KAFKA-7777 > URL: https://issues.apache.org/jira/browse/KAFKA-7777 > Project: Kafka > Issue Type: Wish > Components: streams >Reporter: Maarten >Priority: Minor > Labels: needs-kip > > It would be valuable to us to have the encoding format in a Kafka topic > decoupled from the encoding format used to cache the data locally in a Kafka > Streams app. > We would like to use the `range()` function in the interactive queries API to > look up a series of results, but can't with our encoding scheme due to our > keys being variable length. > We use protobuf, but based on what I've read, Avro, FlatBuffers and Cap'n > Proto have similar problems. > Currently we use the following code to work around this problem: > {code} > builder > .stream("input-topic", Consumed.with(inputKeySerde, inputValueSerde)) > .to("intermediate-topic", Produced.with(intermediateKeySerde, > intermediateValueSerde)); > t1 = builder > .table("intermediate-topic", Consumed.with(intermediateKeySerde, > intermediateValueSerde), t1Materialized); > {code} > With the encoding formats decoupled, the code above could be reduced to a > single step, not requiring an intermediate topic. 
> Based on feedback on my [SO > question|https://stackoverflow.com/questions/53913571/is-there-a-way-to-separate-kafka-topic-serdes-from-materialized-serdes] > a change that introduces this would impact state restoration when using an > input topic for recovery. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
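The `range()` limitation described in the ticket comes from stores comparing *serialized* keys byte by byte. A small self-contained illustration (plain Java, no Kafka APIs): a fixed-width big-endian encoding keeps byte order aligned with numeric order for non-negative keys, while a variable-length textual encoding does not.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Range scans over a key-value store compare serialized keys
// lexicographically in unsigned byte order (as RocksDB's default
// comparator does), so whether range() behaves as expected depends
// entirely on the key serde's byte layout.
public class KeyOrder {

    // Unsigned lexicographic comparison of two byte arrays.
    static int compareBytes(byte[] a, byte[] b) {
        int n = Math.min(a.length, b.length);
        for (int i = 0; i < n; i++) {
            int cmp = Integer.compare(a[i] & 0xff, b[i] & 0xff);
            if (cmp != 0) return cmp;
        }
        return Integer.compare(a.length, b.length);
    }

    static byte[] bigEndian(int v) {
        return ByteBuffer.allocate(4).putInt(v).array();
    }

    static byte[] text(int v) {
        return Integer.toString(v).getBytes(StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        // Fixed width, big-endian: byte order matches numeric order
        // (for non-negative keys).
        System.out.println(compareBytes(bigEndian(2), bigEndian(10)) < 0);
        // Variable length: "10" sorts before "2", so a scan over the
        // serialized range for keys 2..10 no longer matches numeric order.
        System.out.println(compareBytes(text(10), text(2)) < 0);
    }
}
```

Varint-based formats such as protobuf have the same property as the textual case: the serialized bytes of a larger key are not guaranteed to sort after those of a smaller one.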
[jira] [Updated] (KAFKA-7937) Flaky Test ResetConsumerGroupOffsetTest.testResetOffsetsNotExistingGroup
[ https://issues.apache.org/jira/browse/KAFKA-7937?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matthias J. Sax updated KAFKA-7937: --- Affects Version/s: 2.1.1 > Flaky Test ResetConsumerGroupOffsetTest.testResetOffsetsNotExistingGroup > > > Key: KAFKA-7937 > URL: https://issues.apache.org/jira/browse/KAFKA-7937 > Project: Kafka > Issue Type: Bug > Components: admin, clients, unit tests >Affects Versions: 2.2.0, 2.1.1, 2.3.0 >Reporter: Matthias J. Sax >Assignee: Gwen Shapira >Priority: Critical > Fix For: 2.3.0, 2.2.1 > > > To get stable nightly builds for `2.2` release, I create tickets for all > observed test failures. > https://builds.apache.org/blue/organizations/jenkins/kafka-2.2-jdk8/detail/kafka-2.2-jdk8/19/pipeline > {quote}kafka.admin.ResetConsumerGroupOffsetTest > > testResetOffsetsNotExistingGroup FAILED > java.util.concurrent.ExecutionException: > org.apache.kafka.common.errors.CoordinatorNotAvailableException: The > coordinator is not available. at > org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45) > at > org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32) > at > org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:89) > at > org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:260) > at > kafka.admin.ConsumerGroupCommand$ConsumerGroupService.resetOffsets(ConsumerGroupCommand.scala:306) > at > kafka.admin.ResetConsumerGroupOffsetTest.testResetOffsetsNotExistingGroup(ResetConsumerGroupOffsetTest.scala:89) > Caused by: org.apache.kafka.common.errors.CoordinatorNotAvailableException: > The coordinator is not available.{quote} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (KAFKA-7937) Flaky Test ResetConsumerGroupOffsetTest.testResetOffsetsNotExistingGroup
[ https://issues.apache.org/jira/browse/KAFKA-7937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16797336#comment-16797336 ] Matthias J. Sax commented on KAFKA-7937: [https://builds.apache.org/blue/organizations/jenkins/kafka-2.1-jdk8/detail/kafka-2.1-jdk8/150/tests] -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (KAFKA-7937) Flaky Test ResetConsumerGroupOffsetTest.testResetOffsetsNotExistingGroup
[ https://issues.apache.org/jira/browse/KAFKA-7937?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matthias J. Sax updated KAFKA-7937: --- Fix Version/s: 2.1.2 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (KAFKA-8026) Flaky Test RegexSourceIntegrationTest#testRegexMatchesTopicsAWhenDeleted
[ https://issues.apache.org/jira/browse/KAFKA-8026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16797328#comment-16797328 ] Matthias J. Sax commented on KAFKA-8026: [https://builds.apache.org/blue/organizations/jenkins/kafka-1.1-jdk7/detail/kafka-1.1-jdk7/253/tests] > Flaky Test RegexSourceIntegrationTest#testRegexMatchesTopicsAWhenDeleted > > > Key: KAFKA-8026 > URL: https://issues.apache.org/jira/browse/KAFKA-8026 > Project: Kafka > Issue Type: Bug > Components: streams, unit tests >Affects Versions: 1.0.2, 1.1.1 >Reporter: Matthias J. Sax >Assignee: Bill Bejeck >Priority: Critical > Labels: flaky-test > Fix For: 1.0.3, 1.1.2 > > > {quote}java.lang.AssertionError: Condition not met within timeout 15000. > Stream tasks not updated > at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:276) > at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:254) > at > org.apache.kafka.streams.integration.RegexSourceIntegrationTest.testRegexMatchesTopicsAWhenDeleted(RegexSourceIntegrationTest.java:215){quote} > Happend in 1.0 and 1.1 builds: > [https://builds.apache.org/blue/organizations/jenkins/kafka-1.0-jdk7/detail/kafka-1.0-jdk7/263/tests/] > and > [https://builds.apache.org/blue/organizations/jenkins/kafka-1.1-jdk7/detail/kafka-1.1-jdk7/249/tests/] -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (KAFKA-8133) Flaky Test MetadataRequestTest#testNoTopicsRequest
Matthias J. Sax created KAFKA-8133: -- Summary: Flaky Test MetadataRequestTest#testNoTopicsRequest Key: KAFKA-8133 URL: https://issues.apache.org/jira/browse/KAFKA-8133 Project: Kafka Issue Type: Bug Components: core, unit tests Affects Versions: 2.1.1 Reporter: Matthias J. Sax Fix For: 2.3.0, 2.1.2, 2.2.1 [https://builds.apache.org/blue/organizations/jenkins/kafka-2.1-jdk8/detail/kafka-2.1-jdk8/151/tests] {quote}org.apache.kafka.common.errors.TopicExistsException: Topic 't1' already exists.{quote} STDOUT: {quote}[2019-03-20 03:49:00,982] ERROR [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition isr-after-broker-shutdown-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition. [2019-03-20 03:49:00,982] ERROR [ReplicaFetcher replicaId=2, leaderId=0, fetcherId=0] Error for partition isr-after-broker-shutdown-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition. [2019-03-20 03:49:15,319] ERROR [ReplicaFetcher replicaId=1, leaderId=2, fetcherId=0] Error for partition replicaDown-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition. [2019-03-20 03:49:15,319] ERROR [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Error for partition replicaDown-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition. [2019-03-20 03:49:20,049] ERROR [ReplicaFetcher replicaId=0, leaderId=1, fetcherId=0] Error for partition testAutoCreate_Topic-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition. 
[2019-03-20 03:49:27,080] ERROR [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Error for partition __consumer_offsets-1 at offset 0 (kafka.server.ReplicaFetcherThread:76) org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition. [2019-03-20 03:49:27,080] ERROR [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition __consumer_offsets-2 at offset 0 (kafka.server.ReplicaFetcherThread:76) org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition. [2019-03-20 03:49:27,080] ERROR [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Error for partition __consumer_offsets-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition. [2019-03-20 03:49:27,538] ERROR [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Error for partition notInternal-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition. [2019-03-20 03:49:27,538] ERROR [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Error for partition notInternal-1 at offset 0 (kafka.server.ReplicaFetcherThread:76) org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition. [2019-03-20 03:49:28,863] WARN Unable to read additional data from client sessionid 0x102fbd81b150003, likely client has closed socket (org.apache.zookeeper.server.NIOServerCnxn:376) [2019-03-20 03:49:40,478] ERROR [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Error for partition t1-2 at offset 0 (kafka.server.ReplicaFetcherThread:76) org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition. 
[2019-03-20 03:49:40,921] ERROR [ReplicaFetcher replicaId=0, leaderId=1, fetcherId=0] Error for partition t2-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition. [2019-03-20 03:49:40,922] ERROR [ReplicaFetcher replicaId=1, leaderId=2, fetcherId=0] Error for partition t2-1 at offset 0 (kafka.server.ReplicaFetcherThread:76) org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition. [2019-03-20 03:49:52,098] WARN Client session timed out, have not heard from server in 4002ms for sessionid 0x102fbd86a4a0002 (org.apache.zookeeper.ClientCnxn:1112) [2019-03-20 03:49:52,099] WARN Unable to read additional data from client sessionid 0x102fbd86a4a0002, likely client has closed socket (org.apache.zookeeper.server.NIOServerCnxn:376) [2019-03-20 03:49:52,415] WARN
[jira] [Updated] (KAFKA-8059) Flaky Test DynamicConnectionQuotaTest #testDynamicConnectionQuota
[ https://issues.apache.org/jira/browse/KAFKA-8059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matthias J. Sax updated KAFKA-8059: --- Affects Version/s: 2.1.1 > Flaky Test DynamicConnectionQuotaTest #testDynamicConnectionQuota > - > > Key: KAFKA-8059 > URL: https://issues.apache.org/jira/browse/KAFKA-8059 > Project: Kafka > Issue Type: Bug > Components: core, unit tests >Affects Versions: 2.2.0, 2.1.1 >Reporter: Matthias J. Sax >Priority: Critical > Labels: flaky-test > Fix For: 2.2.1 > > > [https://builds.apache.org/blue/organizations/jenkins/kafka-2.2-jdk8/detail/kafka-2.2-jdk8/46/tests] > {quote}org.scalatest.junit.JUnitTestFailedError: Expected exception > java.io.IOException to be thrown, but no exception was thrown > at > org.scalatest.junit.AssertionsForJUnit$class.newAssertionFailedException(AssertionsForJUnit.scala:100) > at > org.scalatest.junit.JUnitSuite.newAssertionFailedException(JUnitSuite.scala:71) > at org.scalatest.Assertions$class.intercept(Assertions.scala:822) > at org.scalatest.junit.JUnitSuite.intercept(JUnitSuite.scala:71) > at > kafka.network.DynamicConnectionQuotaTest.testDynamicConnectionQuota(DynamicConnectionQuotaTest.scala:82){quote} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (KAFKA-8059) Flaky Test DynamicConnectionQuotaTest #testDynamicConnectionQuota
[ https://issues.apache.org/jira/browse/KAFKA-8059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16797334#comment-16797334 ] Matthias J. Sax commented on KAFKA-8059: Failed again in 2.1 this time: [https://builds.apache.org/blue/organizations/jenkins/kafka-2.1-jdk8/detail/kafka-2.1-jdk8/150/tests] -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (KAFKA-8059) Flaky Test DynamicConnectionQuotaTest #testDynamicConnectionQuota
[ https://issues.apache.org/jira/browse/KAFKA-8059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matthias J. Sax updated KAFKA-8059: --- Fix Version/s: 2.1.2 2.3.0 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (KAFKA-8132) Flaky Test MirrorMakerIntegrationTest #testCommaSeparatedRegex
Matthias J. Sax created KAFKA-8132: -- Summary: Flaky Test MirrorMakerIntegrationTest #testCommaSeparatedRegex Key: KAFKA-8132 URL: https://issues.apache.org/jira/browse/KAFKA-8132 Project: Kafka Issue Type: Bug Components: mirrormaker, unit tests Affects Versions: 2.1.1 Reporter: Matthias J. Sax Fix For: 2.3.0, 2.1.2, 2.2.1 [https://builds.apache.org/blue/organizations/jenkins/kafka-2.1-jdk8/detail/kafka-2.1-jdk8/150/tests] {quote}kafka.tools.MirrorMaker$NoRecordsException at kafka.tools.MirrorMaker$ConsumerWrapper.receive(MirrorMaker.scala:483) at kafka.tools.MirrorMakerIntegrationTest$$anonfun$testCommaSeparatedRegex$1.apply$mcZ$sp(MirrorMakerIntegrationTest.scala:92) at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:742) at kafka.tools.MirrorMakerIntegrationTest.testCommaSeparatedRegex(MirrorMakerIntegrationTest.scala:90){quote} STDOUT (repeatable outputs): {quote}[2019-03-19 21:47:06,115] ERROR [Consumer clientId=consumer-1029, groupId=test-group] Offset commit failed on partition nonexistent-topic1-0 at offset 0: This server does not host this topic-partition. (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:812){quote} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Comment Edited] (KAFKA-8106) Remove unnecessary decompression operation when logValidator do validation.
[ https://issues.apache.org/jira/browse/KAFKA-8106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16797016#comment-16797016 ] Flower.min edited comment on KAFKA-8106 at 3/20/19 4:17 PM: We suggest removing the unnecessary decompression operation when validating compressed messages, in the case where the magic value in use is above 1 and no format conversion or value overwriting is required. The improved code is as follows. # Add a class *_SimplifiedDefaultRecord_* implementing the *_Record_* interface, which defines the various attributes of a message. # Add a function _*simplifiedreadFrom()*_ to class *_DefaultRecord_*. This function does not decompress the full message; it just reads the size in bytes of the record body, computes the size in bytes of one record, and returns an instance of *_SimplifiedDefaultRecord_*. # Add functions _*simplifiedIterator()*_ and _*simplifiedCompressedIterator()*_ to class *_DefaultRecordBatch_*. These two functions return iterators over instances of class _*SimplifiedDefaultRecord*_. # Modify the code of function *_validateMessagesAndAssignOffsetsCompressed()_* in class *_LogValidator_*. was (Author: flower.min): we suggest that removing unnecessary decompression operation when doing validation for compressed message when magic value to use is above 1 and no format conversion or value overwriting is required for compressed messages.And improved code is as follows. # Add a class *_SimplifiedDefaultRecord_* implementing class *_Record_* which defines various attributes of a message . # Add a function _*simplifiedreadFrom()*_ at class *_DefaultRecord_* .This function will not decompress a full message,it just read size in bytes of record body and compute size in bytes of one record then return a instance of *_SimplifiedDefaultRecord_* . 
# Add function _*simplifiedIterator()*_ and _*simplifiedCompressedIterator*_() at class *_DefaultRecordBatch_*.This two functions will return iterator of instance belongs to class _*SimplifiedDefaultRecord*_. # Modify code of function *_validateMessagesAndAssignOffsetsCompressed_*() at class *_LogValidator_*. > Remove unnecessary decompression operation when logValidator do validation. > > > Key: KAFKA-8106 > URL: https://issues.apache.org/jira/browse/KAFKA-8106 > Project: Kafka > Issue Type: Bug > Components: core >Affects Versions: 2.1.1 > Environment: Server : > cpu:2*16 ; > MemTotal : 256G; > Ethernet controller:Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network > Connection ; > SSD. >Reporter: Flower.min >Assignee: Flower.min >Priority: Major > Labels: performance > > We ran performance tests of Kafka in the specific scenarios described below. We built a Kafka cluster with one broker and created topics with different numbers of partitions, then started many producer processes sending large amounts of messages to one of the topics in each test. > *_Specific Scenario_* > *_1. Main config of Kafka_* > # Main config of the Kafka server: num.network.threads=6; num.io.threads=128; queued.max.requests=500 > # Number of TopicPartitions: 50~2000 > # Size of a single message: 1024B > *_2. Config of KafkaProducer_* > ||compression.type||linger.ms||batch.size||buffer.memory|| > |lz4|1000ms~5000ms|16KB/10KB/100KB|128MB| > *_3. Best result of the performance testing_* > ||Network inflow rate||CPU used (%)||Disk write speed||Production throughput|| > |550MB/s~610MB/s|97%~99%|550MB/s~610MB/s|23,000,000 messages/s| > *_4. Phenomenon and my doubt_* > _The upper limit of CPU usage is reached, but the upper limit of the server's network bandwidth is not. *We are doubtful about what costs so much CPU time, and we want to improve performance and reduce the CPU usage of the Kafka server.*_ > _*5. Analysis*_ > We analyzed the JFR recordings of the Kafka server taken during performance testing. We found that the hot-spot methods are *_"java.io.DataInputStream.readFully(byte[],int,int)"_* and *_"org.apache.kafka.common.record.KafkaLZ4BlockInputStream.read(byte[],int,int)"_*. When checking thread stack information, we also found most of the CPU being occupied by many threads busy decompressing messages. We then read the source code of Kafka. > There is a double-layer nested iterator when LogValidator validates every record, and there is a decompression for each message when traversing every RecordBatch iterator. Decompressing every message consumes CPU and affects total performance. _*The purpose of
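The core of the proposal — computing record sizes without materializing record bodies — can be sketched in isolation. This is an illustrative reimplementation, not the actual patch: it assumes the v2 record layout, in which each record starts with its body length encoded as a zig-zag varint.

```java
// Illustrative sketch of size-only record traversal: each record in a v2
// batch is prefixed by its body length as a zig-zag varint, so a validator
// that needs only sizes can decode the prefix and skip the body instead of
// decoding keys, values, and headers.
public class SkipRecords {

    // Decode one zig-zag varint from buf, advancing pos[0].
    static int readVarint(byte[] buf, int[] pos) {
        int value = 0;
        int shift = 0;
        int b;
        while (((b = buf[pos[0]++] & 0xff) & 0x80) != 0) {
            value |= (b & 0x7f) << shift;
            shift += 7;
        }
        value |= b << shift;
        return (value >>> 1) ^ -(value & 1); // undo zig-zag encoding
    }

    // Return the body sizes of `count` records without reading their contents.
    static int[] recordSizes(byte[] batch, int count) {
        int[] pos = {0};
        int[] sizes = new int[count];
        for (int i = 0; i < count; i++) {
            sizes[i] = readVarint(batch, pos);
            pos[0] += sizes[i]; // skip the record body entirely
        }
        return sizes;
    }
}
```

For a compressed batch the bytes still have to be decompressed once to be scanned, but nothing is allocated or decoded per field, which is where the profiled CPU time went.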
[jira] [Comment Edited] (KAFKA-8106) Remove unnecessary decompression operation when logValidator do validation.
[ https://issues.apache.org/jira/browse/KAFKA-8106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16797016#comment-16797016 ] Flower.min edited comment on KAFKA-8106 at 3/20/19 4:16 PM: we suggest that removing unnecessary decompression operation when doing validation for compressed message when magic value to use is above 1 and no format conversion or value overwriting is required for compressed messages.And improved code is as follows. # Add a class *_SimplifiedDefaultRecord_* implementing class *_Record_* which defines various attributes of a message . # Add a function _*simplifiedreadFrom()*_ at class *_DefaultRecord_* .This function will not decompress a full message,it just read size in bytes of record body and compute size in bytes of one record then return a instance of *_SimplifiedDefaultRecord_* . # Add function _*simplifiedIterator()*_ and _*simplifiedCompressedIterator*_() at class *_DefaultRecordBatch_*.This two functions will return iterator of instance belongs to class _*SimplifiedDefaultRecord*_. # Modify code of function *_validateMessagesAndAssignOffsetsCompressed_*() at class *_LogValidator_*. was (Author: flower.min): we suggest that removing unnecessary decompression operation when doing validation for compressed message when magic value to use is above 1 and no format conversion or value overwriting is required for compressed messages.And improved code is as follows. # Add a class *_SimplifiedDefaultRecord_* implementing class *_Record_* which define various attributes of a message . # Add a Function _*simplifiedreadFrom()*_ at class DefaultRecord .This function will not decompress a full message,it just read size in bytes of record body and compute size in bytes of one record then return a instance ofSimplifiedDefaultRecord . 
# Add Function _*simplifiedIterator()*_ and _*simplifiedCompressedIterator*_() at class DefaultRecordBatch.This two functions will return iterator of instance belongs to classSimplifiedDefaultRecord. # Modify code of function *_validateMessagesAndAssignOffsetsCompressed_*() at class LogValidator. > Remove unnecessary decompression operation when logValidator do validation. > > > Key: KAFKA-8106 > URL: https://issues.apache.org/jira/browse/KAFKA-8106 > Project: Kafka > Issue Type: Bug > Components: core >Affects Versions: 2.1.1 > Environment: Server : > cpu:2*16 ; > MemTotal : 256G; > Ethernet controller:Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network > Connection ; > SSD. >Reporter: Flower.min >Assignee: Flower.min >Priority: Major > Labels: performance > > We do performance testing about Kafka in specific scenarios as > described below .We build a kafka cluster with one broker,and create topics > with different number of partitions.Then we start lots of producer processes > to send large amounts of messages to one of the topics at one testing . > *_Specific Scenario_* > > *_1.Main config of Kafka_* > # Main config of Kafka > server:[num.network.threads=6;num.io.threads=128;queued.max.requests=500|http://num.network.threads%3D6%3Bnum.io.threads%3D128%3Bqueued.max.requests%3D500/] > # Number of TopicPartition : 50~2000 > # Size of Single Message : 1024B > > *_2.Config of KafkaProducer_* > ||compression.type||[linger.ms|http://linger.ms/]||batch.size||buffer.memory|| > |lz4|1000ms~5000ms|16KB/10KB/100KB|128MB| > *_3.The best result of performance testing_* > ||Network inflow rate||CPU Used (%)||Disk write speed||Performance of > production|| > |550MB/s~610MB/s|97%~99%|:550MB/s~610MB/s |23,000,000 messages/s| > *_4.Phenomenon and my doubt_* > _The upper limit of CPU usage has been reached But it does not > reach the upper limit of the bandwidth of the server network. 
*We are > doubtful about which cost too much CPU time and we want to Improve > performance and reduces CPU usage of Kafka server.*_ > > _*5.Analysis*_ > We analysis the JFIR of Kafka server when doing performance testing > .We found the hot spot method is > *_"java.io.DataInputStream.readFully(byte[],int,int)"_* and > *_"org.apache.kafka.common.record.KafkaLZ4BlockInputStream.read(byte[],int,int)"_*.When > we checking thread stack information we also have found most CPU being > occupied by lots of thread which is busy decompressing messages.Then we > read source code of Kafka . > There is double-layer nested Iterator when LogValidator do validate > every record.And There is a decompression for each message when traversing > every RecordBatch iterator. It is consuming CPU and affect total performance > that decompress message._*The purpose of decompressing every messages just
[jira] [Comment Edited] (KAFKA-8106) Remove unnecessary decompression operation when logValidator do validation.
[ https://issues.apache.org/jira/browse/KAFKA-8106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16797016#comment-16797016 ] Flower.min edited comment on KAFKA-8106 at 3/20/19 4:13 PM:

We suggest removing the unnecessary decompression operation when validating compressed messages, when the magic value in use is above 1 and no format conversion or value overwriting is required. The improved code is as follows:
# Add a class _*SimplifiedDefaultRecord*_ implementing the _*Record*_ interface, which defines the various attributes of a message.
# Add a function _*simplifiedreadFrom()*_ to class DefaultRecord. This function does not decompress the full message; it only reads the size in bytes of the record body, computes the size in bytes of the record, and returns an instance of SimplifiedDefaultRecord.
# Add functions _*simplifiedIterator()*_ and _*simplifiedCompressedIterator()*_ to class DefaultRecordBatch. These two functions return iterators over instances of SimplifiedDefaultRecord.
# Modify the code of function *_validateMessagesAndAssignOffsetsCompressed_*() in class LogValidator.

was (Author: flower.min):
We suggest removing the unnecessary decompression operation when validating compressed messages, when the magic value in use is above 1 and no format conversion or value overwriting is required. The improved code is as follows:
# Add a class _*SimplifiedDefaultRecord*_ implementing the _*Record*_ interface, which defines the various attributes of a message.
# Add a function _*simplifiedreadFrom()*_ to class DefaultRecord. This function does not decompress the full message; it only reads the size in bytes of the record body, computes the size in bytes of the record, and returns an instance of SimplifiedDefaultRecord.
# Add functions _*simplifiedIterator()*_ and _*simplifiedCompressedIterator()*_ to class DefaultRecordBatch. These two functions return iterators over instances of SimplifiedDefaultRecord.
# Modify the code of function *_validateMessagesAndAssignOffsetsCompressed_*() in class LogValidator.
[jira] [Comment Edited] (KAFKA-8106) Remove unnecessary decompression operation when logValidator do validation.
[ https://issues.apache.org/jira/browse/KAFKA-8106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16797016#comment-16797016 ] Flower.min edited comment on KAFKA-8106 at 3/20/19 4:07 PM:

We suggest removing the unnecessary decompression operation when validating compressed messages, when the magic value in use is above 1 and no format conversion or value overwriting is required. The improved code is as follows:
# Add a class _*SimplifiedDefaultRecord*_ implementing the _*Record*_ interface, which defines the various attributes of a message.
# Add a function _*simplifiedreadFrom()*_ to class DefaultRecord. This function does not decompress the full message; it only reads the size in bytes of the record body, computes the size in bytes of the record, and returns an instance of SimplifiedDefaultRecord.
# Add functions _*simplifiedIterator()*_ and _*simplifiedCompressedIterator()*_ to class DefaultRecordBatch. These two functions return iterators over instances of SimplifiedDefaultRecord.
# Modify the code of function *_validateMessagesAndAssignOffsetsCompressed_*() in class LogValidator.

was (Author: flower.min):
We suggest removing the unnecessary decompression operation when validating compressed messages, when the magic value in use is above 1 and no format conversion or value overwriting is required. The improved code is as follows:
# Add a class SimplifiedDefaultRecord implementing the Record interface, which defines the various attributes of a message.
# Add a function _*simplifiedreadFrom()*_ to class DefaultRecord. This function does not decompress the full message; it only reads the size in bytes of the record body, computes the size in bytes of the record, and returns an instance of SimplifiedDefaultRecord.
# Add functions simplifiedIterator() and _*simplifiedCompressedIterator()*_ to class DefaultRecordBatch. These two functions return iterators over instances of SimplifiedDefaultRecord.
# Modify the code of function validateMessagesAndAssignOffsetsCompressed() in class LogValidator.
[jira] [Resolved] (KAFKA-7723) Kafka Connect support override worker kafka api configuration with connector configuration that post by rest api
[ https://issues.apache.org/jira/browse/KAFKA-7723?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] laomei resolved KAFKA-7723.
---
Resolution: Duplicate

> Kafka Connect support override worker kafka api configuration with connector
> configuration that post by rest api
>
> Key: KAFKA-7723
> URL: https://issues.apache.org/jira/browse/KAFKA-7723
> Project: Kafka
> Issue Type: Improvement
> Components: KafkaConnect
> Reporter: laomei
> Priority: Minor
> Labels: needs-kip
>
> I'm using a Kafka sink connector; "auto.offset.reset" is set in
> connect-distributed*.properties. It applies to every connector in the worker, so each
> consumer polls records from latest or earliest; I cannot control auto.offset.reset in the
> connector configs posted via the REST API.
> So I think it is necessary to allow overriding the worker's Kafka API configs with
> connector configs, like this:
> {code:java}
> {
>   "name": "test",
>   "config": {
>     "consumer.auto.offset.reset": "latest",
>     "consumer.xxx"
>     "connector.class": "com.laomei.sis.solr.SolrConnector",
>     "tasks.max": "1",
>     "poll.interval.ms": "100",
>     "connect.timeout.ms": "6",
>     "topics": "test"
>   }
> }
> {code}
> That way we could override the Kafka consumer's auto.offset.reset in a sink connector.

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
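The request above was resolved as a duplicate (the later comments point to KAFKA-6890, and Kafka Connect eventually gained per-connector client config overrides via KIP-458). The sketch below is a hypothetical illustration of the mechanism the reporter asks for, not Connect's actual implementation: keys carrying a `consumer.` prefix in the connector config are stripped of the prefix and layered over the worker-level consumer properties. The class and method names (`ConsumerOverrides`, `overridesWithPrefix`, `demo`) are invented for this example.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch (not Connect's real code): extract "consumer."-prefixed
// keys from a connector config so they override worker-level consumer settings.
class ConsumerOverrides {

    // Collect entries whose key starts with `prefix`, with the prefix removed,
    // e.g. "consumer.auto.offset.reset" -> "auto.offset.reset".
    static Map<String, String> overridesWithPrefix(Map<String, String> connectorConfig,
                                                   String prefix) {
        Map<String, String> overrides = new HashMap<>();
        for (Map.Entry<String, String> e : connectorConfig.entrySet()) {
            if (e.getKey().startsWith(prefix)) {
                overrides.put(e.getKey().substring(prefix.length()), e.getValue());
            }
        }
        return overrides;
    }

    // Demo: a worker default of "earliest" overridden by the connector config.
    static String demo() {
        Map<String, String> connectorConfig = new HashMap<>();
        connectorConfig.put("connector.class", "com.laomei.sis.solr.SolrConnector");
        connectorConfig.put("consumer.auto.offset.reset", "latest");

        Map<String, String> consumerProps = new HashMap<>();
        consumerProps.put("auto.offset.reset", "earliest"); // worker-level default
        consumerProps.putAll(overridesWithPrefix(connectorConfig, "consumer."));
        return consumerProps.get("auto.offset.reset");
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints "latest"
    }
}
```

The design point is precedence: connector-level values are applied last, so they win over worker defaults, which is exactly what the reporter could not do with a single worker-wide `auto.offset.reset`.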
[jira] [Commented] (KAFKA-7723) Kafka Connect support override worker kafka api configuration with connector configuration that post by rest api
[ https://issues.apache.org/jira/browse/KAFKA-7723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16797260#comment-16797260 ] ASF GitHub Bot commented on KAFKA-7723:
---
sweat123 commented on pull request #6026: KAFKA-7723: Support override kafka connect worker api configuration with rest api URL: https://github.com/apache/kafka/pull/6026
This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (KAFKA-7723) Kafka Connect support override worker kafka api configuration with connector configuration that post by rest api
[ https://issues.apache.org/jira/browse/KAFKA-7723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16797258#comment-16797258 ] laomei commented on KAFKA-7723:
---
[~rhauch] I'm sorry for the late response. I have viewed KAFKA-6890 after you mentioned it; I think we can close this issue, and I will close the PR soon. Thanks a lot!

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (KAFKA-7978) Flaky Test SaslSslAdminClientIntegrationTest#testConsumerGroups
[ https://issues.apache.org/jira/browse/KAFKA-7978?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Manikumar resolved KAFKA-7978.
--
Resolution: Duplicate

Resolving as a duplicate of KAFKA-8098. Feel free to reopen the issue if it occurs again.

> Flaky Test SaslSslAdminClientIntegrationTest#testConsumerGroups
> ---
> Key: KAFKA-7978
> URL: https://issues.apache.org/jira/browse/KAFKA-7978
> Project: Kafka
> Issue Type: Bug
> Components: core, unit tests
> Affects Versions: 2.2.0, 2.3.0
> Reporter: Matthias J. Sax
> Priority: Critical
> Labels: flaky-test
> Fix For: 2.3.0, 2.2.1
>
> To get stable nightly builds for `2.2` release, I create tickets for all
> observed test failures.
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/25/]
> {quote}java.lang.AssertionError: expected:<2> but was:<0> at
> org.junit.Assert.fail(Assert.java:88) at
> org.junit.Assert.failNotEquals(Assert.java:834) at
> org.junit.Assert.assertEquals(Assert.java:645) at
> org.junit.Assert.assertEquals(Assert.java:631) at
> kafka.api.AdminClientIntegrationTest.testConsumerGroups(AdminClientIntegrationTest.scala:1157)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method){quote}

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (KAFKA-8098) Flaky Test AdminClientIntegrationTest#testConsumerGroups
[ https://issues.apache.org/jira/browse/KAFKA-8098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Manikumar resolved KAFKA-8098. -- Resolution: Fixed Fix Version/s: 2.2.1 > Flaky Test AdminClientIntegrationTest#testConsumerGroups > > > Key: KAFKA-8098 > URL: https://issues.apache.org/jira/browse/KAFKA-8098 > Project: Kafka > Issue Type: Bug > Components: core, unit tests >Affects Versions: 2.3.0 >Reporter: Matthias J. Sax >Assignee: huxihx >Priority: Critical > Labels: flaky-test > Fix For: 2.3.0, 2.2.1 > > > [https://builds.apache.org/blue/organizations/jenkins/kafka-trunk-jdk8/detail/kafka-trunk-jdk8/3459/tests] > {quote}java.lang.AssertionError: expected:<2> but was:<0> > at org.junit.Assert.fail(Assert.java:89) > at org.junit.Assert.failNotEquals(Assert.java:835) > at org.junit.Assert.assertEquals(Assert.java:647) > at org.junit.Assert.assertEquals(Assert.java:633) > at > kafka.api.AdminClientIntegrationTest.testConsumerGroups(AdminClientIntegrationTest.scala:1194){quote} > STDOUT > {quote}2019-03-12 10:52:33,482] ERROR [ReplicaFetcher replicaId=2, > leaderId=1, fetcherId=0] Error for partition topic-0 at offset 0 > (kafka.server.ReplicaFetcherThread:76) > org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server > does not host this topic-partition. > [2019-03-12 10:52:33,485] ERROR [ReplicaFetcher replicaId=0, leaderId=1, > fetcherId=0] Error for partition topic-0 at offset 0 > (kafka.server.ReplicaFetcherThread:76) > org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server > does not host this topic-partition. 
> [2019-03-12 10:52:35,880] WARN Unable to read additional data from client > sessionid 0x104458575770003, likely client has closed socket > (org.apache.zookeeper.server.NIOServerCnxn:376) > [2019-03-12 10:52:38,596] ERROR [ReplicaFetcher replicaId=2, leaderId=0, > fetcherId=0] Error for partition elect-preferred-leaders-topic-1-0 at offset > 0 (kafka.server.ReplicaFetcherThread:76) > org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server > does not host this topic-partition. > [2019-03-12 10:52:38,797] ERROR [ReplicaFetcher replicaId=2, leaderId=0, > fetcherId=0] Error for partition elect-preferred-leaders-topic-2-0 at offset > 0 (kafka.server.ReplicaFetcherThread:76) > org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server > does not host this topic-partition. > [2019-03-12 10:52:51,998] ERROR [ReplicaFetcher replicaId=2, leaderId=0, > fetcherId=0] Error for partition topic-0 at offset 0 > (kafka.server.ReplicaFetcherThread:76) > org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server > does not host this topic-partition. > [2019-03-12 10:52:52,005] ERROR [ReplicaFetcher replicaId=1, leaderId=0, > fetcherId=0] Error for partition topic-0 at offset 0 > (kafka.server.ReplicaFetcherThread:76) > org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server > does not host this topic-partition. > [2019-03-12 10:53:13,750] ERROR [ReplicaFetcher replicaId=0, leaderId=1, > fetcherId=0] Error for partition mytopic2-0 at offset 0 > (kafka.server.ReplicaFetcherThread:76) > org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server > does not host this topic-partition. > [2019-03-12 10:53:13,754] ERROR [ReplicaFetcher replicaId=2, leaderId=1, > fetcherId=0] Error for partition mytopic2-0 at offset 0 > (kafka.server.ReplicaFetcherThread:76) > org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server > does not host this topic-partition. 
> [2019-03-12 10:53:13,755] ERROR [ReplicaFetcher replicaId=1, leaderId=0, > fetcherId=0] Error for partition mytopic2-1 at offset 0 > (kafka.server.ReplicaFetcherThread:76) > org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server > does not host this topic-partition. > [2019-03-12 10:53:13,760] ERROR [ReplicaFetcher replicaId=2, leaderId=0, > fetcherId=0] Error for partition mytopic2-1 at offset 0 > (kafka.server.ReplicaFetcherThread:76) > org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server > does not host this topic-partition. > [2019-03-12 10:53:13,757] ERROR [ReplicaFetcher replicaId=0, leaderId=2, > fetcherId=0] Error for partition mytopic2-2 at offset 0 > (kafka.server.ReplicaFetcherThread:76) > org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server > does not host this topic-partition. > [2019-03-12 10:53:13,769] ERROR [ReplicaFetcher replicaId=0, leaderId=2, > fetcherId=0] Error for partition mytopic-1 at offset 0 > (kafka.server.ReplicaFetcherThread:76) > org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server > does not host this
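A note on the failure pattern quoted above: an assertion like `expected:<2> but was:<0>` in an integration test usually means the test asserted on cluster state (here, the number of consumer groups) before that state had propagated. A common deflaking remedy, in the spirit of Kafka's own `TestUtils.waitUntilTrue` (visible in the stack traces elsewhere in this digest), is to poll the condition with a timeout instead of asserting once. The sketch below is a hedged illustration of that pattern, not the actual fix merged for KAFKA-8098; the class name `WaitUntil` is invented here.

```java
import java.util.function.BooleanSupplier;

// Hedged sketch of a waitUntilTrue-style helper: poll a condition until it
// holds or a timeout elapses, instead of asserting on state that may not
// have propagated yet. Not Kafka's actual TestUtils code.
class WaitUntil {

    static boolean waitUntilTrue(BooleanSupplier condition, long timeoutMs, long pauseMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (condition.getAsBoolean())
                return true;           // condition met before the deadline
            Thread.sleep(pauseMs);     // back off before re-checking
        }
        return condition.getAsBoolean(); // one final check at the deadline
    }

    // Demo: a condition that becomes true ~50 ms after start, polled every 10 ms.
    static boolean demo() {
        try {
            long start = System.currentTimeMillis();
            return waitUntilTrue(() -> System.currentTimeMillis() - start > 50, 1000, 10);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints "true"
    }
}
```

The trade-off is test latency versus stability: a generous timeout only costs time in the failing case, whereas a one-shot assertion fails whenever the broker is slower than the test.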
[jira] [Commented] (KAFKA-8098) Flaky Test AdminClientIntegrationTest#testConsumerGroups
[ https://issues.apache.org/jira/browse/KAFKA-8098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16797198#comment-16797198 ] ASF GitHub Bot commented on KAFKA-8098:
---
omkreddy commented on pull request #6474: KAFKA-8098: Fix Flaky Test testConsumerGroups URL: https://github.com/apache/kafka/pull/6474
This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (KAFKA-8106) Remove unnecessary decompression operation when logValidator do validation.
[ https://issues.apache.org/jira/browse/KAFKA-8106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16797016#comment-16797016 ] Flower.min commented on KAFKA-8106:
---
We suggest removing the unnecessary decompression operation when validating compressed messages, when the magic value in use is above 1 and no format conversion or value overwriting is required. The improved code is as follows:
# Add a class SimplifiedDefaultRecord implementing the Record interface, which defines the various attributes of a message.
# Add a function _*simplifiedreadFrom()*_ to class DefaultRecord. This function does not decompress the full message; it only reads the size in bytes of the record body, computes the size in bytes of the record, and returns an instance of SimplifiedDefaultRecord.
# Add functions simplifiedIterator() and _*simplifiedCompressedIterator()*_ to class DefaultRecordBatch. These two functions return iterators over instances of SimplifiedDefaultRecord.
# Modify the code of function validateMessagesAndAssignOffsetsCompressed() in class LogValidator.

> Remove unnecessary decompression operation when LogValidator does validation.
>
> Key: KAFKA-8106
> URL: https://issues.apache.org/jira/browse/KAFKA-8106
> Project: Kafka
> Issue Type: Bug
> Components: core
> Affects Versions: 2.1.1
> Environment: Server:
> CPU: 2*16;
> MemTotal: 256G;
> Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection;
> SSD.
> Reporter: Flower.min
> Priority: Major
> Labels: performance
>
> We performed performance testing of Kafka in the specific scenarios described below. We built a Kafka cluster with one broker and created topics with different numbers of partitions. We then started many producer processes to send large amounts of messages to one of the topics in each test.
> *_Specific Scenario_*
> *_1. Main config of Kafka_*
> # Main config of the Kafka server: num.network.threads=6; num.io.threads=128; queued.max.requests=500
> # Number of TopicPartitions: 50~2000
> # Size of a single message: 1024B
> *_2. Config of KafkaProducer_*
> ||compression.type||linger.ms||batch.size||buffer.memory||
> |lz4|1000ms~5000ms|16KB/10KB/100KB|128MB|
> *_3. The best result of performance testing_*
> ||Network inflow rate||CPU used (%)||Disk write speed||Production throughput||
> |550MB/s~610MB/s|97%~99%|550MB/s~610MB/s|23,000,000 messages/s|
> *_4. Phenomenon and my doubt_*
> _The upper limit of CPU usage has been reached, but the upper limit of the server's network bandwidth has not. *We want to find out what costs so much CPU time, in order to improve performance and reduce the CPU usage of the Kafka server.*_
> _*5. Analysis*_
> We analyzed the JFR (Java Flight Recorder) data of the Kafka server during performance testing. We found that the hot-spot methods are *_"java.io.DataInputStream.readFully(byte[],int,int)"_* and *_"org.apache.kafka.common.record.KafkaLZ4BlockInputStream.read(byte[],int,int)"_*. When checking thread stack information, we also found most CPU time consumed by many threads busy decompressing messages. We then read the Kafka source code.
> There is a double-layer nested iterator when LogValidator validates every record, and there is a decompression for each message when traversing every RecordBatch iterator. This decompression consumes CPU and hurts total performance. _*The only purpose of decompressing every message is to obtain the total size in bytes of one record and the size in bytes of the record body, when the magic value in use is above 1 and no format conversion or value overwriting is required for compressed messages. This is negative for performance in common usage scenarios.*_ Therefore, we suggest *_removing the unnecessary decompression operation_* when validating compressed messages, when the magic value in use is above 1 and no format conversion or value overwriting is required.

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
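The core of the proposal above is that a v2 record batch stores an explicit length prefix before each record, so a validator can learn record sizes without decoding (or fully materializing) keys and values. The sketch below is a hypothetical illustration of that idea; it is not Kafka's implementation, and the proposed names from the comment (`SimplifiedDefaultRecord`, `simplifiedreadFrom()`) do not exist in the codebase. It walks length-prefixed records encoded with zigzag varints, as Kafka's v2 record format uses, collecting each record's size while skipping its body.

```java
import java.nio.ByteBuffer;

// Hypothetical sketch of the "size without decode" idea: walk length-prefixed
// records, record each record's size, and never read the record bodies.
// Varints here are zigzag-encoded, as in Kafka's v2 record format.
class RecordSizeScan {

    // Decode a zigzag-encoded varint from the buffer.
    static int readVarint(ByteBuffer buf) {
        int value = 0;
        int shift = 0;
        int b;
        while (((b = buf.get()) & 0x80) != 0) {
            value |= (b & 0x7f) << shift;
            shift += 7;
        }
        value |= b << shift;
        return (value >>> 1) ^ -(value & 1); // zigzag decode
    }

    // Encode a zigzag varint (used only to build the demo buffer).
    static void writeVarint(ByteBuffer buf, int value) {
        int v = (value << 1) ^ (value >> 31); // zigzag encode
        while ((v & 0xffffff80) != 0) {
            buf.put((byte) ((v & 0x7f) | 0x80));
            v >>>= 7;
        }
        buf.put((byte) v);
    }

    // Return the size in bytes of each record body without reading the bodies.
    static int[] scanSizes(ByteBuffer buf, int recordCount) {
        int[] sizes = new int[recordCount];
        for (int i = 0; i < recordCount; i++) {
            int bodySize = readVarint(buf);          // read the length prefix only
            buf.position(buf.position() + bodySize); // skip the body: no decode
            sizes[i] = bodySize;
        }
        return sizes;
    }

    // Demo: two fake records of 5 and 130 body bytes (bodies left as zeros).
    static String demo() {
        ByteBuffer buf = ByteBuffer.allocate(256);
        for (int size : new int[] {5, 130}) {
            writeVarint(buf, size);
            buf.position(buf.position() + size);
        }
        buf.flip();
        int[] sizes = scanSizes(buf, 2);
        return sizes[0] + "," + sizes[1];
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints "5,130"
    }
}
```

Note the assumption being leaned on: this only works when validation needs sizes alone. If offsets must be rewritten inside records or the format converted, the bodies still have to be decompressed, which is exactly the condition ("magic value above 1, no conversion, no overwriting") the reporter attaches to the optimization.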
[jira] [Commented] (KAFKA-8126) Flaky Test org.apache.kafka.connect.runtime.WorkerTest.testAddRemoveTask
[ https://issues.apache.org/jira/browse/KAFKA-8126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16796996#comment-16796996 ] ASF GitHub Bot commented on KAFKA-8126: --- adoroszlai commented on pull request #6475: [KAFKA-8126] Fix flaky tests in WorkerTest URL: https://github.com/apache/kafka/pull/6475 ## What changes were proposed in this pull request? Fix flaky test cases in `WorkerTest` by mocking the `ExecutorService` in `Worker`. Previously, when using a real thread pool executor, the task may or may not have been run by the executor until the end of the test. Related JIRA issues: * [KAFKA-8126](https://issues.apache.org/jira/browse/KAFKA-8126) * [KAFKA-8063](https://issues.apache.org/jira/browse/KAFKA-8063) * [KAFKA-5141](https://issues.apache.org/jira/browse/KAFKA-5141) ## How was this patch tested? Ran all tests (`./gradlew test`). Ran unit tests in `connect/runtime` repeatedly. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
For queries about this service, please contact Infrastructure at: us...@infra.apache.org > Flaky Test org.apache.kafka.connect.runtime.WorkerTest.testAddRemoveTask > > > Key: KAFKA-8126 > URL: https://issues.apache.org/jira/browse/KAFKA-8126 > Project: Kafka > Issue Type: Improvement > Components: KafkaConnect, unit tests >Reporter: Guozhang Wang >Priority: Major > > {code} > Stacktrace > java.lang.AssertionError: > Expectation failure on verify: > WorkerSourceTask.run(): expected: 1, actual: 0 > at org.easymock.internal.MocksControl.verify(MocksControl.java:242) > at > org.powermock.api.easymock.internal.invocationcontrol.EasyMockMethodInvocationControl.verify(EasyMockMethodInvocationControl.java:126) > at org.powermock.api.easymock.PowerMock.verify(PowerMock.java:1476) > at org.powermock.api.easymock.PowerMock.verifyAll(PowerMock.java:1415) > at > org.apache.kafka.connect.runtime.WorkerTest.testAddRemoveTask(WorkerTest.java:589) > at > java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.base/java.lang.reflect.Method.invoke(Method.java:566) > at org.junit.internal.runners.TestMethod.invoke(TestMethod.java:68) > {code} > https://builds.apache.org/job/kafka-pr-jdk11-scala2.12/3330/testReport/junit/org.apache.kafka.connect.runtime/WorkerTest/testAddRemoveTask/ -- This message was sent by Atlassian JIRA (v7.6.3#76005)
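The PR above deflakes `WorkerTest` by mocking the `ExecutorService`, so that a submitted task has definitely run before the test verifies its expectations. One standard way to get that determinism (a hedged sketch of the general idea, not the actual code in PR #6475) is a same-thread executor: `execute` runs the task on the caller's thread and returns only after it completes. The class name `DirectExecutorDemo` is invented for this example.

```java
import java.util.concurrent.Executor;
import java.util.concurrent.atomic.AtomicInteger;

// Hedged sketch of the deflaking idea: with a real thread pool, a submitted
// task may or may not have run by the time the test verifies; a same-thread
// Executor completes the task before execute() returns, so expectations like
// "WorkerSourceTask.run(): expected: 1" are met deterministically.
class DirectExecutorDemo {

    // Executes each task synchronously on the caller's thread.
    static final Executor DIRECT = Runnable::run;

    // Demo: the counter is already incremented when execute() returns.
    static int demo() {
        AtomicInteger runs = new AtomicInteger();
        DIRECT.execute(runs::incrementAndGet);
        return runs.get(); // no sleeping or polling needed
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints 1
    }
}
```

The same effect can be had by injecting a mock executor (as the PR does) when the test also needs to verify *how* tasks were submitted, not just that they ran.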
[jira] [Commented] (KAFKA-8098) Flaky Test AdminClientIntegrationTest#testConsumerGroups
[ https://issues.apache.org/jira/browse/KAFKA-8098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16796973#comment-16796973 ] ASF GitHub Bot commented on KAFKA-8098: --- huxihx commented on pull request #6474: merge KAFKA-8098 patch to 2.2 URL: https://github.com/apache/kafka/pull/6474 *More detailed description of your change, if necessary. The PR title and PR message become the squashed commit message, so use a separate comment to ping reviewers.* *Summary of testing strategy (including rationale) for the feature or bug fix. Unit and/or integration tests are expected for any behaviour change and system tests should be considered for larger changes.* ### Committer Checklist (excluded from commit message) - [ ] Verify design and implementation - [ ] Verify test coverage and CI build status - [ ] Verify documentation (including upgrade notes) This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org > Flaky Test AdminClientIntegrationTest#testConsumerGroups > > > Key: KAFKA-8098 > URL: https://issues.apache.org/jira/browse/KAFKA-8098 > Project: Kafka > Issue Type: Bug > Components: core, unit tests >Affects Versions: 2.3.0 >Reporter: Matthias J. 
Sax >Assignee: huxihx >Priority: Critical > Labels: flaky-test > Fix For: 2.3.0 > > > [https://builds.apache.org/blue/organizations/jenkins/kafka-trunk-jdk8/detail/kafka-trunk-jdk8/3459/tests] > {quote}java.lang.AssertionError: expected:<2> but was:<0> > at org.junit.Assert.fail(Assert.java:89) > at org.junit.Assert.failNotEquals(Assert.java:835) > at org.junit.Assert.assertEquals(Assert.java:647) > at org.junit.Assert.assertEquals(Assert.java:633) > at > kafka.api.AdminClientIntegrationTest.testConsumerGroups(AdminClientIntegrationTest.scala:1194){quote} > STDOUT > {quote}2019-03-12 10:52:33,482] ERROR [ReplicaFetcher replicaId=2, > leaderId=1, fetcherId=0] Error for partition topic-0 at offset 0 > (kafka.server.ReplicaFetcherThread:76) > org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server > does not host this topic-partition. > [2019-03-12 10:52:33,485] ERROR [ReplicaFetcher replicaId=0, leaderId=1, > fetcherId=0] Error for partition topic-0 at offset 0 > (kafka.server.ReplicaFetcherThread:76) > org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server > does not host this topic-partition. > [2019-03-12 10:52:35,880] WARN Unable to read additional data from client > sessionid 0x104458575770003, likely client has closed socket > (org.apache.zookeeper.server.NIOServerCnxn:376) > [2019-03-12 10:52:38,596] ERROR [ReplicaFetcher replicaId=2, leaderId=0, > fetcherId=0] Error for partition elect-preferred-leaders-topic-1-0 at offset > 0 (kafka.server.ReplicaFetcherThread:76) > org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server > does not host this topic-partition. > [2019-03-12 10:52:38,797] ERROR [ReplicaFetcher replicaId=2, leaderId=0, > fetcherId=0] Error for partition elect-preferred-leaders-topic-2-0 at offset > 0 (kafka.server.ReplicaFetcherThread:76) > org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server > does not host this topic-partition. 
> [2019-03-12 10:52:51,998] ERROR [ReplicaFetcher replicaId=2, leaderId=0, > fetcherId=0] Error for partition topic-0 at offset 0 > (kafka.server.ReplicaFetcherThread:76) > org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server > does not host this topic-partition. > [2019-03-12 10:52:52,005] ERROR [ReplicaFetcher replicaId=1, leaderId=0, > fetcherId=0] Error for partition topic-0 at offset 0 > (kafka.server.ReplicaFetcherThread:76) > org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server > does not host this topic-partition. > [2019-03-12 10:53:13,750] ERROR [ReplicaFetcher replicaId=0, leaderId=1, > fetcherId=0] Error for partition mytopic2-0 at offset 0 > (kafka.server.ReplicaFetcherThread:76) > org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server > does not host this topic-partition. > [2019-03-12 10:53:13,754] ERROR [ReplicaFetcher replicaId=2, leaderId=1, > fetcherId=0] Error for partition mytopic2-0 at offset 0 > (kafka.server.ReplicaFetcherThread:76) > org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server > does not host this topic-partition. > [2019-03-12 10:53:13,755] ERROR [ReplicaFetcher replicaId=1, leaderId=0, > fetcherId=0] Error for
[jira] [Commented] (KAFKA-7925) Constant 100% cpu usage by all kafka brokers
[ https://issues.apache.org/jira/browse/KAFKA-7925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16796961#comment-16796961 ] Abhi commented on KAFKA-7925: - [~rsivaram] Any updates on whether this will be available in the Kafka v2.2.0 release? > Constant 100% cpu usage by all kafka brokers > > > Key: KAFKA-7925 > URL: https://issues.apache.org/jira/browse/KAFKA-7925 > Project: Kafka > Issue Type: Bug > Components: core >Affects Versions: 2.1.0, 2.1.1 > Environment: Java 11, Kafka v2.1.0, Kafka v2.1.1 >Reporter: Abhi >Priority: Critical > Attachments: jira-server.log-1, jira-server.log-2, jira-server.log-3, > jira-server.log-4, jira-server.log-5, jira-server.log-6, > jira_prod.producer.log, threadump20190212.txt > > > Hi, > I am seeing constant 100% cpu usage on all brokers in our kafka cluster even > without any clients connected to any broker. > This is a bug that we have seen multiple times in our kafka setup that is not > yet open to clients. It is becoming a blocker for our deployment now. > I am seeing a lot of connections to other brokers in CLOSE_WAIT state (see > below). In thread usage, I am seeing these threads > 'kafka-network-thread-6-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0,kafka-network-thread-6-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1,kafka-network-thread-6-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-2' > taking up more than 90% of the cpu time in a 60s interval. > I have attached a thread dump of one of the brokers in the cluster. 
> *Java version:* > openjdk 11.0.2 2019-01-15 > OpenJDK Runtime Environment 18.9 (build 11.0.2+9) > OpenJDK 64-Bit Server VM 18.9 (build 11.0.2+9, mixed mode) > *Kafka verison:* v2.1.0 > > *connections:* > java 144319 kafkagod 88u IPv4 3063266 0t0 TCP *:35395 (LISTEN) > java 144319 kafkagod 89u IPv4 3063267 0t0 TCP *:9144 (LISTEN) > java 144319 kafkagod 104u IPv4 3064219 0t0 TCP > mwkafka-prod-02.tbd:47292->mwkafka-zk-prod-05.tbd:2181 (ESTABLISHED) > java 144319 kafkagod 2003u IPv4 3055115 0t0 TCP *:9092 (LISTEN) > java 144319 kafkagod 2013u IPv4 7220110 0t0 TCP > mwkafka-prod-02.tbd:60724->mwkafka-zk-prod-04.dr:2181 (ESTABLISHED) > java 144319 kafkagod 2020u IPv4 30012904 0t0 TCP > mwkafka-prod-02.tbd:38988->mwkafka-prod-02.nyc:9092 (ESTABLISHED) > java 144319 kafkagod 2021u IPv4 30012961 0t0 TCP > mwkafka-prod-02.tbd:58420->mwkafka-prod-01.nyc:9092 (ESTABLISHED) > java 144319 kafkagod 2027u IPv4 30015723 0t0 TCP > mwkafka-prod-02.tbd:58398->mwkafka-prod-01.nyc:9092 (ESTABLISHED) > java 144319 kafkagod 2028u IPv4 30015630 0t0 TCP > mwkafka-prod-02.tbd:36248->mwkafka-prod-02.dr:9092 (ESTABLISHED) > java 144319 kafkagod 2030u IPv4 30015726 0t0 TCP > mwkafka-prod-02.tbd:39012->mwkafka-prod-02.nyc:9092 (ESTABLISHED) > java 144319 kafkagod 2031u IPv4 30013619 0t0 TCP > mwkafka-prod-02.tbd:38986->mwkafka-prod-02.nyc:9092 (ESTABLISHED) > java 144319 kafkagod 2032u IPv4 30015604 0t0 TCP > mwkafka-prod-02.tbd:36246->mwkafka-prod-02.dr:9092 (ESTABLISHED) > java 144319 kafkagod 2033u IPv4 30012981 0t0 TCP > mwkafka-prod-02.tbd:36924->mwkafka-prod-01.dr:9092 (ESTABLISHED) > java 144319 kafkagod 2034u IPv4 30012967 0t0 TCP > mwkafka-prod-02.tbd:39036->mwkafka-prod-02.nyc:9092 (ESTABLISHED) > java 144319 kafkagod 2035u IPv4 30012898 0t0 TCP > mwkafka-prod-02.tbd:36866->mwkafka-prod-01.dr:9092 (FIN_WAIT2) > java 144319 kafkagod 2036u IPv4 30004729 0t0 TCP > mwkafka-prod-02.tbd:36882->mwkafka-prod-01.dr:9092 (ESTABLISHED) > java 144319 kafkagod 2037u IPv4 30004914 0t0 TCP > 
mwkafka-prod-02.tbd:58426->mwkafka-prod-01.nyc:9092 (ESTABLISHED) > java 144319 kafkagod 2038u IPv4 30015651 0t0 TCP > mwkafka-prod-02.tbd:36884->mwkafka-prod-01.dr:9092 (ESTABLISHED) > java 144319 kafkagod 2039u IPv4 30012966 0t0 TCP > mwkafka-prod-02.tbd:58422->mwkafka-prod-01.nyc:9092 (ESTABLISHED) > java 144319 kafkagod 2040u IPv4 30005643 0t0 TCP > mwkafka-prod-02.tbd:36252->mwkafka-prod-02.dr:9092 (ESTABLISHED) > java 144319 kafkagod 2041u IPv4 30012944 0t0 TCP > mwkafka-prod-02.tbd:36286->mwkafka-prod-02.dr:9092 (ESTABLISHED) > java 144319 kafkagod 2042u IPv4 30012973 0t0 TCP > mwkafka-prod-02.tbd:9092->mwkafka-prod-01.nyc:51924 (ESTABLISHED) > java 144319 kafkagod 2043u sock 0,7 0t0 30012463 protocol: TCP > java 144319 kafkagod 2044u IPv4 30012979 0t0 TCP > mwkafka-prod-02.tbd:9092->mwkafka-prod-01.dr:39994 (ESTABLISHED) > java 144319 kafkagod 2045u IPv4 30012899 0t0 TCP > mwkafka-prod-02.tbd:9092->mwkafka-prod-02.nyc:34548 (ESTABLISHED) > java 144319 kafkagod 2046u sock 0,7 0t0 30003437 protocol: TCP > java 144319 kafkagod 2047u IPv4 30012980 0t0 TCP > mwkafka-prod-02.tbd:9092->mwkafka-prod-02.dr:38120 (ESTABLISHED) > java 144319 kafkagod 2048u sock 0,7 0t0 30012546 protocol: TCP > java 144319 kafkagod 2049u IPv4 30005418 0t0 TCP >
[jira] [Created] (KAFKA-8131) Add --version parameter to command line help outputs & docs
Sönke Liebau created KAFKA-8131: --- Summary: Add --version parameter to command line help outputs & docs Key: KAFKA-8131 URL: https://issues.apache.org/jira/browse/KAFKA-8131 Project: Kafka Issue Type: Improvement Components: tools Reporter: Sönke Liebau Assignee: Sönke Liebau KAFKA-2061 added the --version flag to kafka-run-class.sh, which prints the Kafka version to the command line. Since this lives in kafka-run-class.sh, it effectively works for all command line tools that use this file to run a class, so it should be added to the help output of those scripts as well. A quick grep leads me to these suspects:
* connect-distributed.sh
* connect-standalone.sh
* kafka-acls.sh
* kafka-broker-api-versions.sh
* kafka-configs.sh
* kafka-console-consumer.sh
* kafka-console-producer.sh
* kafka-consumer-groups.sh
* kafka-consumer-perf-test.sh
* kafka-delegation-tokens.sh
* kafka-delete-records.sh
* kafka-dump-log.sh
* kafka-log-dirs.sh
* kafka-mirror-maker.sh
* kafka-preferred-replica-election.sh
* kafka-producer-perf-test.sh
* kafka-reassign-partitions.sh
* kafka-replica-verification.sh
* kafka-server-start.sh
* kafka-streams-application-reset.sh
* kafka-topics.sh
* kafka-verifiable-consumer.sh
* kafka-verifiable-producer.sh
* trogdor.sh
* zookeeper-security-migration.sh
* zookeeper-server-start.sh
* zookeeper-shell.sh
Currently this parameter is not documented at all, neither in the help output nor in the official docs. I'd propose to add it to the docs as part of this issue as well; I'll look for a suitable place. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
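Because every script in the list above funnels through kafka-run-class.sh, the flag only needs to be handled in one place; this ticket is about advertising it in each wrapper's help output and in the docs. The sketch below is a hypothetical, self-contained model of that dispatch pattern — it is not Kafka's actual launcher, and the version string is a placeholder:

```shell
#!/usr/bin/env bash
# Model of a shared launcher that intercepts --version before dispatching.
# In Kafka, kafka-run-class.sh plays this role and every *.sh wrapper that
# delegates to it inherits the flag for free.
VERSION="2.2.0"   # placeholder; the real script derives this from the jars

run_class() {
  local arg
  for arg in "$@"; do
    if [ "$arg" = "--version" ]; then
      echo "$VERSION"
      return 0
    fi
  done
  echo "would exec: java ... $*"   # stand-in for the real java invocation
}

run_class kafka.tools.GetOffsetShell --version   # prints 2.2.0
```

Since the interception happens before the tool class is ever launched, documenting `--version` in each wrapper's usage text is purely an additive change.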
[jira] [Created] (KAFKA-8130) The consumer is not closed in GetOffsetShell, will exhausted socket channel when frequent calls
ouyangwulin created KAFKA-8130: -- Summary: The consumer is not closed in GetOffsetShell, will exhausted socket channel when frequent calls Key: KAFKA-8130 URL: https://issues.apache.org/jira/browse/KAFKA-8130 Project: Kafka Issue Type: Bug Components: core Affects Versions: 0.10.2.2 Reporter: ouyangwulin Fix For: 0.10.2.2 When the command "bin/kafka-run-class.sh kafka.tools.GetOffsetShell --topic test --time -2 --broker-list 127.0.0.1:9092 --partitions 1" is run frequently, or kafka.tools.GetOffsetShell is invoked from code more times than the open-socket limit allows, it fails with a "Too many open files" error. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (KAFKA-8130) The consumer is not closed in GetOffsetShell, will exhausted socket channel when frequent calls
[ https://issues.apache.org/jira/browse/KAFKA-8130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16796874#comment-16796874 ] ASF GitHub Bot commented on KAFKA-8130: --- Mrart commented on pull request #6473: KAFKA-8130: The consumer is not closed in GetOffsetShell, will exhausted socket channel when frequent calls URL: https://github.com/apache/kafka/pull/6473 When the command "bin/kafka-run-class.sh kafka.tools.GetOffsetShell --topic test --time -2 --broker-list 127.0.0.1:9092 --partitions 1" is run frequently, or kafka.tools.GetOffsetShell is invoked from code more times than the open-socket limit allows, it fails with a "Too many open files" error. ### Committer Checklist (excluded from commit message) - [ ] Verify design and implementation - [ ] Verify test coverage and CI build status - [ ] Verify documentation (including upgrade notes) This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org > The consumer is not closed in GetOffsetShell, will exhausted socket channel > when frequent calls > > > Key: KAFKA-8130 > URL: https://issues.apache.org/jira/browse/KAFKA-8130 > Project: Kafka > Issue Type: Bug > Components: core >Affects Versions: 0.10.2.2 >Reporter: ouyangwulin >Priority: Trivial > Labels: pull-request-available > Fix For: 0.10.2.2 > > > When the command "bin/kafka-run-class.sh kafka.tools.GetOffsetShell --topic > test --time -2 --broker-list 127.0.0.1:9092 --partitions 1" is run > frequently, or kafka.tools.GetOffsetShell is invoked from code more times > than the open-socket limit allows, it fails with a "Too many open files" > error. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
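The leak pattern behind KAFKA-8130, and the usual shape of the fix, can be sketched generically. The class below is a hypothetical stand-in for the socket-holding client GetOffsetShell creates (it is not Kafka's consumer); the point is that wrapping the client in try-with-resources guarantees close() runs on every invocation, so repeated calls no longer accumulate open sockets:

```java
import java.io.Closeable;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical stand-in for a socket-holding client such as a consumer.
class FakeOffsetClient implements Closeable {
    static final AtomicInteger OPEN_SOCKETS = new AtomicInteger();
    FakeOffsetClient() { OPEN_SOCKETS.incrementAndGet(); }   // "opens" a socket
    long earliestOffset() { return 0L; }                     // pretend lookup
    @Override public void close() { OPEN_SOCKETS.decrementAndGet(); }
}

class OffsetLookup {
    // Buggy shape: the client is never closed, so each call leaks a socket
    // and frequent invocations eventually hit "Too many open files".
    static long lookupLeaky() { return new FakeOffsetClient().earliestOffset(); }

    // Fixed shape: try-with-resources closes the client even if the lookup
    // throws, so the open-socket count returns to zero after every call.
    static long lookupFixed() {
        try (FakeOffsetClient client = new FakeOffsetClient()) {
            return client.earliestOffset();
        }
    }
}
```

The same discipline applies to any Kafka client used by a short-lived CLI tool: the process exiting usually hides the leak, but when the class is called in a loop from long-lived code, every unclosed client holds its file descriptor until the GC happens to reclaim it.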