[jira] [Created] (KAFKA-8117) Flaky Test DelegationTokenEndToEndAuthorizationTest#testNoProduceWithDescribeAcl

2019-03-16 Thread Matthias J. Sax (JIRA)
Matthias J. Sax created KAFKA-8117:
--

 Summary: Flaky Test 
DelegationTokenEndToEndAuthorizationTest#testNoProduceWithDescribeAcl
 Key: KAFKA-8117
 URL: https://issues.apache.org/jira/browse/KAFKA-8117
 Project: Kafka
  Issue Type: Bug
  Components: core, unit tests
Affects Versions: 2.3.0
Reporter: Matthias J. Sax
 Fix For: 2.3.0


[https://builds.apache.org/blue/organizations/jenkins/kafka-trunk-jdk8/detail/kafka-trunk-jdk8/3470/tests]
{quote}java.util.concurrent.ExecutionException: 
org.apache.kafka.common.errors.SaslAuthenticationException: Authentication 
failed during authentication due to invalid credentials with SASL mechanism 
SCRAM-SHA-256
at 
org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
at 
org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
at 
org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:89)
at 
org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:260)
at 
kafka.api.DelegationTokenEndToEndAuthorizationTest.createDelegationToken(DelegationTokenEndToEndAuthorizationTest.scala:88)
at 
kafka.api.DelegationTokenEndToEndAuthorizationTest.configureSecurityAfterServersStart(DelegationTokenEndToEndAuthorizationTest.scala:63)
at 
kafka.integration.KafkaServerTestHarness.setUp(KafkaServerTestHarness.scala:107)
at kafka.api.IntegrationTestHarness.doSetup(IntegrationTestHarness.scala:81)
at kafka.api.IntegrationTestHarness.setUp(IntegrationTestHarness.scala:73)
at 
kafka.api.EndToEndAuthorizationTest.setUp(EndToEndAuthorizationTest.scala:183)
at 
kafka.api.DelegationTokenEndToEndAuthorizationTest.setUp(DelegationTokenEndToEndAuthorizationTest.scala:74){quote}
STDOUT
{quote}Adding ACLs for resource `Cluster:LITERAL:kafka-cluster`:
User:scram-admin has Allow permission for operations: ClusterAction from hosts: 
*
 
Current ACLs for resource `Cluster:LITERAL:kafka-cluster`:
User:scram-admin has Allow permission for operations: ClusterAction from hosts: 
*
 
Adding ACLs for resource `Topic:LITERAL:*`:
User:scram-admin has Allow permission for operations: Read from hosts: *
 
Current ACLs for resource `Topic:LITERAL:*`:
User:scram-admin has Allow permission for operations: Read from hosts: *
 
Completed Updating config for entity: user-principal 'scram-admin'.
Completed Updating config for entity: user-principal 'scram-user'.
Adding ACLs for resource `Topic:LITERAL:e2etopic`:
User:scram-user has Allow permission for operations: Write from hosts: *
User:scram-user has Allow permission for operations: Create from hosts: *
User:scram-user has Allow permission for operations: Describe from hosts: *
 
Current ACLs for resource `Topic:LITERAL:e2etopic`:
User:scram-user has Allow permission for operations: Write from hosts: *
User:scram-user has Allow permission for operations: Create from hosts: *
User:scram-user has Allow permission for operations: Describe from hosts: *
 
Adding ACLs for resource `Group:LITERAL:group`:
User:scram-user has Allow permission for operations: Read from hosts: *
 
Current ACLs for resource `Group:LITERAL:group`:
User:scram-user has Allow permission for operations: Read from hosts: *
 
Current ACLs for resource `Topic:LITERAL:e2etopic`:
User:scram-user has Allow permission for operations: Write from hosts: *
User:scram-user has Allow permission for operations: Create from hosts: *
 
Current ACLs for resource `Topic:LITERAL:e2etopic`:
User:scram-user has Allow permission for operations: Create from hosts: *
 
[2019-03-16 03:39:08,540] WARN Unable to read additional data from client 
sessionid 0x1045892126e0013, likely client has closed socket 
(org.apache.zookeeper.server.NIOServerCnxn:376)
[2019-03-16 03:39:08,634] ERROR [Consumer clientId=consumer-21, groupId=group] 
Topic authorization failed for topics [e2etopic] 
(org.apache.kafka.clients.Metadata:297)
Adding ACLs for resource `Cluster:LITERAL:kafka-cluster`:
User:scram-admin has Allow permission for operations: ClusterAction from hosts: 
*
 
Current ACLs for resource `Cluster:LITERAL:kafka-cluster`:
User:scram-admin has Allow permission for operations: ClusterAction from hosts: 
*
 
Adding ACLs for resource `Topic:LITERAL:*`:
User:scram-admin has Allow permission for operations: Read from hosts: *
 
Current ACLs for resource `Topic:LITERAL:*`:
User:scram-admin has Allow permission for operations: Read from hosts: *
 
Completed Updating config for entity: user-principal 'scram-admin'.
Completed Updating config for entity: user-principal 'scram-user'.
Adding ACLs for resource `Topic:PREFIXED:e2e`:
User:scram-user has Allow permission for operations: Read from hosts: *
User:scram-user has Allow permission for operations: Describe from hosts: *
User:scram-user has Allow permission for operations: Write from hosts: *
User:scram-user has Allow permission for oper
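
For context, the setUp step in the stack trace blocks on the future returned by the 
admin client's createDelegationToken call; a minimal sketch of that call path (the 
connection and SCRAM credential properties here are assumptions, not taken from the 
test) would be:
{code}
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.CreateDelegationTokenOptions;
import org.apache.kafka.common.security.token.delegation.DelegationToken;

public class CreateTokenSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");  // assumed endpoint
        props.put("security.protocol", "SASL_PLAINTEXT");  // SCRAM listener, as in the test
        props.put("sasl.mechanism", "SCRAM-SHA-256");
        props.put("sasl.jaas.config",
            "org.apache.kafka.common.security.scram.ScramLoginModule required "
                + "username=\"scram-user\" password=\"user-secret\";");  // placeholder credentials
        try (AdminClient admin = AdminClient.create(props)) {
            // The get() below is the frame where the test fails with
            // SaslAuthenticationException when the SCRAM credentials are rejected.
            DelegationToken token = admin
                .createDelegationToken(new CreateDelegationTokenOptions())
                .delegationToken()
                .get();
            System.out.println("token id: " + token.tokenInfo().tokenId());
        }
    }
}
{code}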

[jira] [Commented] (KAFKA-8117) Flaky Test DelegationTokenEndToEndAuthorizationTest#testNoProduceWithDescribeAcl

2019-03-16 Thread Matthias J. Sax (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16794350#comment-16794350
 ] 

Matthias J. Sax commented on KAFKA-8117:


[~omkreddy], [~rsivaram]: the stack trace is different, so I am not sure whether the 
latest fix for KAFKA-8114 would address this, too. Thus, I created a new ticket. 
If you think it is fixed by KAFKA-8114, just close this ticket.

> Flaky Test 
> DelegationTokenEndToEndAuthorizationTest#testNoProduceWithDescribeAcl
> 
>
> Key: KAFKA-8117
> URL: https://issues.apache.org/jira/browse/KAFKA-8117
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.3.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0
>
>
> [https://builds.apache.org/blue/organizations/jenkins/kafka-trunk-jdk8/detail/kafka-trunk-jdk8/3470/tests]
> {quote}java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.SaslAuthenticationException: Authentication 
> failed during authentication due to invalid credentials with SASL mechanism 
> SCRAM-SHA-256
> at 
> org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
> at 
> org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
> at 
> org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:89)
> at 
> org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:260)
> at 
> kafka.api.DelegationTokenEndToEndAuthorizationTest.createDelegationToken(DelegationTokenEndToEndAuthorizationTest.scala:88)
> at 
> kafka.api.DelegationTokenEndToEndAuthorizationTest.configureSecurityAfterServersStart(DelegationTokenEndToEndAuthorizationTest.scala:63)
> at 
> kafka.integration.KafkaServerTestHarness.setUp(KafkaServerTestHarness.scala:107)
> at kafka.api.IntegrationTestHarness.doSetup(IntegrationTestHarness.scala:81)
> at kafka.api.IntegrationTestHarness.setUp(IntegrationTestHarness.scala:73)
> at 
> kafka.api.EndToEndAuthorizationTest.setUp(EndToEndAuthorizationTest.scala:183)
> at 
> kafka.api.DelegationTokenEndToEndAuthorizationTest.setUp(DelegationTokenEndToEndAuthorizationTest.scala:74){quote}
> STDOUT
> {quote}Adding ACLs for resource `Cluster:LITERAL:kafka-cluster`:
> User:scram-admin has Allow permission for operations: ClusterAction from 
> hosts: *
>  
> Current ACLs for resource `Cluster:LITERAL:kafka-cluster`:
> User:scram-admin has Allow permission for operations: ClusterAction from 
> hosts: *
>  
> Adding ACLs for resource `Topic:LITERAL:*`:
> User:scram-admin has Allow permission for operations: Read from hosts: *
>  
> Current ACLs for resource `Topic:LITERAL:*`:
> User:scram-admin has Allow permission for operations: Read from hosts: *
>  
> Completed Updating config for entity: user-principal 'scram-admin'.
> Completed Updating config for entity: user-principal 'scram-user'.
> Adding ACLs for resource `Topic:LITERAL:e2etopic`:
> User:scram-user has Allow permission for operations: Write from hosts: *
> User:scram-user has Allow permission for operations: Create from hosts: *
> User:scram-user has Allow permission for operations: Describe from hosts: *
>  
> Current ACLs for resource `Topic:LITERAL:e2etopic`:
> User:scram-user has Allow permission for operations: Write from hosts: *
> User:scram-user has Allow permission for operations: Create from hosts: *
> User:scram-user has Allow permission for operations: Describe from hosts: *
>  
> Adding ACLs for resource `Group:LITERAL:group`:
> User:scram-user has Allow permission for operations: Read from hosts: *
>  
> Current ACLs for resource `Group:LITERAL:group`:
> User:scram-user has Allow permission for operations: Read from hosts: *
>  
> Current ACLs for resource `Topic:LITERAL:e2etopic`:
> User:scram-user has Allow permission for operations: Write from hosts: *
> User:scram-user has Allow permission for operations: Create from hosts: *
>  
> Current ACLs for resource `Topic:LITERAL:e2etopic`:
> User:scram-user has Allow permission for operations: Create from hosts: *
>  
> [2019-03-16 03:39:08,540] WARN Unable to read additional data from client 
> sessionid 0x1045892126e0013, likely client has closed socket 
> (org.apache.zookeeper.server.NIOServerCnxn:376)
> [2019-03-16 03:39:08,634] ERROR [Consumer clientId=consumer-21, 
> groupId=group] Topic authorization failed for topics [e2etopic] 
> (org.apache.kafka.clients.Metadata:297)
> Adding ACLs for resource `Cluster:LITERAL:kafka-cluster`:
> User:scram-admin has Allow permission for operations: ClusterAction from 
> hosts: *
>  
> Current ACLs for resource `Cluster:LITERAL:kafka-cluster`:
> User:scram-admin has Allow permission for operations: ClusterAction

[jira] [Commented] (KAFKA-8030) Flaky Test TopicCommandWithAdminClientTest#testDescribeUnderMinIsrPartitionsMixed

2019-03-16 Thread Matthias J. Sax (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16794351#comment-16794351
 ] 

Matthias J. Sax commented on KAFKA-8030:


One more: 
[https://builds.apache.org/blue/organizations/jenkins/kafka-trunk-jdk8/detail/kafka-trunk-jdk8/3471/tests]

> Flaky Test 
> TopicCommandWithAdminClientTest#testDescribeUnderMinIsrPartitionsMixed
> -
>
> Key: KAFKA-8030
> URL: https://issues.apache.org/jira/browse/KAFKA-8030
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, unit tests
>Affects Versions: 2.3.0
>Reporter: Matthias J. Sax
>Assignee: Viktor Somogyi-Vass
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0
>
>
> [https://builds.apache.org/job/kafka-pr-jdk11-scala2.12/2830/testReport/junit/kafka.admin/TopicCommandWithAdminClientTest/testDescribeUnderMinIsrPartitionsMixed/]
> {quote}java.lang.AssertionError at org.junit.Assert.fail(Assert.java:87) at 
> org.junit.Assert.assertTrue(Assert.java:42) at 
> org.junit.Assert.assertTrue(Assert.java:53) at 
> kafka.admin.TopicCommandWithAdminClientTest.testDescribeUnderMinIsrPartitionsMixed(TopicCommandWithAdminClientTest.scala:602){quote}
> STDERR
> {quote}Option "[replica-assignment]" can't be used with option 
> "[partitions]"{quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8030) Flaky Test TopicCommandWithAdminClientTest#testDescribeUnderMinIsrPartitionsMixed

2019-03-16 Thread Matthias J. Sax (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16794364#comment-16794364
 ] 

Matthias J. Sax commented on KAFKA-8030:


Again: 
[https://builds.apache.org/job/kafka-pr-jdk11-scala2.12/3279/testReport/junit/kafka.admin/TopicCommandWithAdminClientTest/testDescribeUnderMinIsrPartitionsMixed/]

> Flaky Test 
> TopicCommandWithAdminClientTest#testDescribeUnderMinIsrPartitionsMixed
> -
>
> Key: KAFKA-8030
> URL: https://issues.apache.org/jira/browse/KAFKA-8030
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, unit tests
>Affects Versions: 2.3.0
>Reporter: Matthias J. Sax
>Assignee: Viktor Somogyi-Vass
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0
>
>
> [https://builds.apache.org/job/kafka-pr-jdk11-scala2.12/2830/testReport/junit/kafka.admin/TopicCommandWithAdminClientTest/testDescribeUnderMinIsrPartitionsMixed/]
> {quote}java.lang.AssertionError at org.junit.Assert.fail(Assert.java:87) at 
> org.junit.Assert.assertTrue(Assert.java:42) at 
> org.junit.Assert.assertTrue(Assert.java:53) at 
> kafka.admin.TopicCommandWithAdminClientTest.testDescribeUnderMinIsrPartitionsMixed(TopicCommandWithAdminClientTest.scala:602){quote}
> STDERR
> {quote}Option "[replica-assignment]" can't be used with option 
> "[partitions]"{quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8115) Flaky Test CoordinatorTest#testTaskRequestWithOldStartMsGetsUpdated

2019-03-16 Thread Matthias J. Sax (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16794366#comment-16794366
 ] 

Matthias J. Sax commented on KAFKA-8115:


Failed again: 
[https://builds.apache.org/job/kafka-pr-jdk11-scala2.12/3279/testReport/junit/org.apache.kafka.trogdor.coordinator/CoordinatorTest/testTaskRequestWithOldStartMsGetsUpdated/]

> Flaky Test CoordinatorTest#testTaskRequestWithOldStartMsGetsUpdated
> ---
>
> Key: KAFKA-8115
> URL: https://issues.apache.org/jira/browse/KAFKA-8115
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.3.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0
>
>
> [https://builds.apache.org/job/kafka-pr-jdk11-scala2.12/3254/testReport/junit/org.apache.kafka.trogdor.coordinator/CoordinatorTest/testTaskRequestWithOldStartMsGetsUpdated/]
> {quote}org.junit.runners.model.TestTimedOutException: test timed out after 
> 120000 milliseconds at java.base@11.0.1/jdk.internal.misc.Unsafe.park(Native 
> Method) at 
> java.base@11.0.1/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234)
>  at 
> java.base@11.0.1/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2123)
>  at 
> java.base@11.0.1/java.util.concurrent.ThreadPoolExecutor.awaitTermination(ThreadPoolExecutor.java:1454)
>  at 
> java.base@11.0.1/java.util.concurrent.Executors$DelegatedExecutorService.awaitTermination(Executors.java:709)
>  at 
> app//org.apache.kafka.trogdor.rest.JsonRestServer.waitForShutdown(JsonRestServer.java:157)
>  at app//org.apache.kafka.trogdor.agent.Agent.waitForShutdown(Agent.java:123) 
> at 
> app//org.apache.kafka.trogdor.common.MiniTrogdorCluster.close(MiniTrogdorCluster.java:285)
>  at 
> app//org.apache.kafka.trogdor.coordinator.CoordinatorTest.testTaskRequestWithOldStartMsGetsUpdated(CoordinatorTest.java:596)
>  at 
> java.base@11.0.1/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) at 
> java.base@11.0.1/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at 
> java.base@11.0.1/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.base@11.0.1/java.lang.reflect.Method.invoke(Method.java:566) at 
> app//org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>  at 
> app//org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> app//org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>  at 
> app//org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at 
> app//org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:288)
>  at 
> app//org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:282)
>  at java.base@11.0.1/java.util.concurrent.FutureTask.run(FutureTask.java:264) 
> at java.base@11.0.1/java.lang.Thread.run(Thread.java:834){quote}
> STDOUT
> {quote}[2019-03-15 09:23:41,364] INFO Creating MiniTrogdorCluster with 
> agents: node02 and coordinator: node01 
> (org.apache.kafka.trogdor.common.MiniTrogdorCluster:135) [2019-03-15 
> 09:23:41,595] INFO Logging initialized @13340ms to 
> org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log:193) 
> [2019-03-15 09:23:41,752] INFO Starting REST server 
> (org.apache.kafka.trogdor.rest.JsonRestServer:89) [2019-03-15 09:23:41,912] 
> INFO Registered resource 
> org.apache.kafka.trogdor.agent.AgentRestResource@3fa38ceb 
> (org.apache.kafka.trogdor.rest.JsonRestServer:94) [2019-03-15 09:23:42,178] 
> INFO jetty-9.4.14.v20181114; built: 2018-11-14T21:20:31.478Z; git: 
> c4550056e785fb5665914545889f21dc136ad9e6; jvm 11.0.1+13-LTS 
> (org.eclipse.jetty.server.Server:370) [2019-03-15 09:23:42,360] INFO 
> DefaultSessionIdManager workerName=node0 
> (org.eclipse.jetty.server.session:365) [2019-03-15 09:23:42,362] INFO No 
> SessionScavenger set, using defaults (org.eclipse.jetty.server.session:370) 
> [2019-03-15 09:23:42,370] INFO node0 Scavenging every 660000ms 
> (org.eclipse.jetty.server.session:149) [2019-03-15 09:23:44,412] INFO Started 
> o.e.j.s.ServletContextHandler@335a5293\{/,null,AVAILABLE} 
> (org.eclipse.jetty.server.handler.ContextHandler:855) [2019-03-15 
> 09:23:44,473] INFO Started 
> ServerConnector@79a93bf1\{HTTP/1.1,[http/1.1]}{0.0.0.0:33477} 
> (org.eclipse.jetty.server.AbstractConnector:292) [2019-03-15 09:23:44,474] 
> INFO Started @16219ms (org.eclipse.jetty.server.Server:407) [2019-03-15 
> 09:23:44,475] INFO REST server listening at [http://127.0.1.1:33477/] 
> (org.apache.kafka.t

[jira] [Commented] (KAFKA-7965) Flaky Test ConsumerBounceTest#testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup

2019-03-16 Thread Matthias J. Sax (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16794369#comment-16794369
 ] 

Matthias J. Sax commented on KAFKA-7965:


One more: 
[https://builds.apache.org/job/kafka-pr-jdk8-scala2.11/20397/testReport/junit/kafka.api/ConsumerBounceTest/testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup/]

org.scalatest.junit.JUnitTestFailedError: Should have received an class 
org.apache.kafka.common.errors.GroupMaxSizeReachedException during the cluster 
roll

> Flaky Test 
> ConsumerBounceTest#testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup
> 
>
> Key: KAFKA-7965
> URL: https://issues.apache.org/jira/browse/KAFKA-7965
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, consumer, unit tests
>Affects Versions: 2.2.0, 2.3.0
>Reporter: Matthias J. Sax
>Assignee: Stanislav Kozlovski
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.2.1
>
>
> To get stable nightly builds for the `2.2` release, I create tickets for all 
> observed test failures.
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/21/]
> {quote}java.lang.AssertionError: Received 0, expected at least 68 at 
> org.junit.Assert.fail(Assert.java:88) at 
> org.junit.Assert.assertTrue(Assert.java:41) at 
> kafka.api.ConsumerBounceTest.receiveAndCommit(ConsumerBounceTest.scala:557) 
> at 
> kafka.api.ConsumerBounceTest.$anonfun$testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup$1(ConsumerBounceTest.scala:320)
>  at 
> kafka.api.ConsumerBounceTest.$anonfun$testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup$1$adapted(ConsumerBounceTest.scala:319)
>  at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62) 
> at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55) 
> at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49) at 
> kafka.api.ConsumerBounceTest.testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup(ConsumerBounceTest.scala:319){quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-4332) kafka.api.UserQuotaTest.testThrottledProducerConsumer transient unit test failure

2019-03-16 Thread Matthias J. Sax (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-4332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16794365#comment-16794365
 ] 

Matthias J. Sax commented on KAFKA-4332:


Failed again: 
[https://builds.apache.org/job/kafka-pr-jdk11-scala2.12/3279/testReport/junit/kafka.api/UserQuotaTest/testThrottledProducerConsumer/]

> kafka.api.UserQuotaTest.testThrottledProducerConsumer transient unit test 
> failure
> -
>
> Key: KAFKA-4332
> URL: https://issues.apache.org/jira/browse/KAFKA-4332
> Project: Kafka
>  Issue Type: Sub-task
>  Components: core, unit tests
>Affects Versions: 0.10.1.0, 2.3.0
>Reporter: Jun Rao
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0
>
>
> kafka.api.UserQuotaTest > testThrottledProducerConsumer FAILED
> java.lang.AssertionError: Should have been throttled



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8026) Flaky Test RegexSourceIntegrationTest#testRegexMatchesTopicsAWhenDeleted

2019-03-17 Thread Matthias J. Sax (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16794535#comment-16794535
 ] 

Matthias J. Sax commented on KAFKA-8026:


Failed again in `1.1`: 
[https://builds.apache.org/blue/organizations/jenkins/kafka-1.1-jdk7/detail/kafka-1.1-jdk7/252/tests]

and `1.0`: 
[https://builds.apache.org/blue/organizations/jenkins/kafka-1.0-jdk7/detail/kafka-1.0-jdk7/264/changes/]

> Flaky Test RegexSourceIntegrationTest#testRegexMatchesTopicsAWhenDeleted
> 
>
> Key: KAFKA-8026
> URL: https://issues.apache.org/jira/browse/KAFKA-8026
> Project: Kafka
>  Issue Type: Bug
>  Components: streams, unit tests
>Affects Versions: 1.0.2, 1.1.1
>Reporter: Matthias J. Sax
>Assignee: Bill Bejeck
>Priority: Critical
>  Labels: flaky-test
> Fix For: 1.0.3, 1.1.2
>
>
> {quote}java.lang.AssertionError: Condition not met within timeout 15000. 
> Stream tasks not updated
> at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:276)
> at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:254)
> at 
> org.apache.kafka.streams.integration.RegexSourceIntegrationTest.testRegexMatchesTopicsAWhenDeleted(RegexSourceIntegrationTest.java:215){quote}
> Happened in 1.0 and 1.1 builds:
> [https://builds.apache.org/blue/organizations/jenkins/kafka-1.0-jdk7/detail/kafka-1.0-jdk7/263/tests/]
> and
> [https://builds.apache.org/blue/organizations/jenkins/kafka-1.1-jdk7/detail/kafka-1.1-jdk7/249/tests/]
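
The quoted failure comes from a generic poll-until-true test helper; a self-contained 
sketch of the pattern (the real helper is org.apache.kafka.test.TestUtils; the poll 
interval below is an assumption) looks like:
{code}
// Sketch of the waitForCondition pattern behind "Condition not met within timeout".
public class WaitForConditionSketch {
    static void waitForCondition(java.util.function.BooleanSupplier condition,
                                 long maxWaitMs, String details) throws InterruptedException {
        long deadline = System.currentTimeMillis() + maxWaitMs;
        while (System.currentTimeMillis() < deadline) {
            if (condition.getAsBoolean())
                return;
            Thread.sleep(100);  // assumed poll interval
        }
        throw new AssertionError("Condition not met within timeout " + maxWaitMs + ". " + details);
    }
}
{code}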



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7965) Flaky Test ConsumerBounceTest#testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup

2019-03-18 Thread Matthias J. Sax (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16795173#comment-16795173
 ] 

Matthias J. Sax commented on KAFKA-7965:


[https://builds.apache.org/blue/organizations/jenkins/kafka-trunk-jdk11/detail/kafka-trunk-jdk11/379/tests]

> Flaky Test 
> ConsumerBounceTest#testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup
> 
>
> Key: KAFKA-7965
> URL: https://issues.apache.org/jira/browse/KAFKA-7965
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, consumer, unit tests
>Affects Versions: 2.2.0, 2.3.0
>Reporter: Matthias J. Sax
>Assignee: Stanislav Kozlovski
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.2.1
>
>
> To get stable nightly builds for the `2.2` release, I create tickets for all 
> observed test failures.
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/21/]
> {quote}java.lang.AssertionError: Received 0, expected at least 68 at 
> org.junit.Assert.fail(Assert.java:88) at 
> org.junit.Assert.assertTrue(Assert.java:41) at 
> kafka.api.ConsumerBounceTest.receiveAndCommit(ConsumerBounceTest.scala:557) 
> at 
> kafka.api.ConsumerBounceTest.$anonfun$testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup$1(ConsumerBounceTest.scala:320)
>  at 
> kafka.api.ConsumerBounceTest.$anonfun$testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup$1$adapted(ConsumerBounceTest.scala:319)
>  at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62) 
> at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55) 
> at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49) at 
> kafka.api.ConsumerBounceTest.testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup(ConsumerBounceTest.scala:319){quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (KAFKA-8122) Flaky Test EosIntegrationTest#shouldNotViolateEosIfOneTaskFailsWithState

2019-03-18 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax reassigned KAFKA-8122:
--

Assignee: Matthias J. Sax

> Flaky Test EosIntegrationTest#shouldNotViolateEosIfOneTaskFailsWithState
> 
>
> Key: KAFKA-8122
> URL: https://issues.apache.org/jira/browse/KAFKA-8122
> Project: Kafka
>  Issue Type: Bug
>  Components: streams, unit tests
>Affects Versions: 2.3.0
>Reporter: Matthias J. Sax
>Assignee: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0
>
>
> [https://builds.apache.org/job/kafka-pr-jdk11-scala2.12/3285/testReport/junit/org.apache.kafka.streams.integration/EosIntegrationTest/shouldNotViolateEosIfOneTaskFailsWithState/]
> {quote}java.lang.AssertionError: Expected: <[KeyValue(0, 0), KeyValue(0, 1), 
> KeyValue(0, 3), KeyValue(0, 6), KeyValue(0, 10), KeyValue(0, 15), KeyValue(0, 
> 21), KeyValue(0, 28), KeyValue(0, 36), KeyValue(0, 45)]> but: was 
> <[KeyValue(0, 0), KeyValue(0, 1), KeyValue(0, 3), KeyValue(0, 6), KeyValue(0, 
> 10), KeyValue(0, 15), KeyValue(0, 21), KeyValue(0, 28), KeyValue(0, 36), 
> KeyValue(0, 45), KeyValue(0, 55), KeyValue(0, 66), KeyValue(0, 78), 
> KeyValue(0, 91), KeyValue(0, 105)]> at 
> org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:18) at 
> org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:6) at 
> org.apache.kafka.streams.integration.EosIntegrationTest.checkResultPerKey(EosIntegrationTest.java:212)
>  at 
> org.apache.kafka.streams.integration.EosIntegrationTest.shouldNotViolateEosIfOneTaskFailsWithState(EosIntegrationTest.java:414){quote}
> STDOUT
> {quote}[2019-03-17 01:19:51,971] INFO Created server with tickTime 800 
> minSessionTimeout 1600 maxSessionTimeout 16000 datadir 
> /tmp/kafka-10997967593034298484/version-2 snapdir 
> /tmp/kafka-5184295822696533708/version-2 
> (org.apache.zookeeper.server.ZooKeeperServer:174) [2019-03-17 01:19:51,971] 
> INFO binding to port /127.0.0.1:0 
> (org.apache.zookeeper.server.NIOServerCnxnFactory:89) [2019-03-17 
> 01:19:51,973] INFO KafkaConfig values: advertised.host.name = null 
> advertised.listeners = null advertised.port = null 
> alter.config.policy.class.name = null 
> alter.log.dirs.replication.quota.window.num = 11 
> alter.log.dirs.replication.quota.window.size.seconds = 1 
> authorizer.class.name = auto.create.topics.enable = false 
> auto.leader.rebalance.enable = true background.threads = 10 broker.id = 0 
> broker.id.generation.enable = true broker.rack = null 
> client.quota.callback.class = null compression.type = producer 
> connection.failed.authentication.delay.ms = 100 connections.max.idle.ms = 
> 600000 connections.max.reauth.ms = 0 control.plane.listener.name = null 
> controlled.shutdown.enable = true controlled.shutdown.max.retries = 3 
> controlled.shutdown.retry.backoff.ms = 5000 controller.socket.timeout.ms = 
> 30000 create.topic.policy.class.name = null default.replication.factor = 1 
> delegation.token.expiry.check.interval.ms = 3600000 
> delegation.token.expiry.time.ms = 86400000 delegation.token.master.key = null 
> delegation.token.max.lifetime.ms = 604800000 
> delete.records.purgatory.purge.interval.requests = 1 delete.topic.enable = 
> true fetch.purgatory.purge.interval.requests = 1000 
> group.initial.rebalance.delay.ms = 0 group.max.session.timeout.ms = 300000 
> group.max.size = 2147483647 group.min.session.timeout.ms = 0 host.name = 
> localhost inter.broker.listener.name = null inter.broker.protocol.version = 
> 2.2-IV1 kafka.metrics.polling.interval.secs = 10 kafka.metrics.reporters = [] 
> leader.imbalance.check.interval.seconds = 300 
> leader.imbalance.per.broker.percentage = 10 listener.security.protocol.map = 
> PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL 
> listeners = null log.cleaner.backoff.ms = 15000 
> log.cleaner.dedupe.buffer.size = 2097152 log.cleaner.delete.retention.ms = 
> 86400000 log.cleaner.enable = true log.cleaner.io.buffer.load.factor = 0.9 
> log.cleaner.io.buffer.size = 524288 log.cleaner.io.max.bytes.per.second = 
> 1.7976931348623157E308 log.cleaner.min.cleanable.ratio = 0.5 
> log.cleaner.min.compaction.lag.ms = 0 log.cleaner.threads = 1 
> log.cleanup.policy = [delete] log.dir = 
> /tmp/junit16020146621422955757/junit17406374597406011269 log.dirs = null 
> log.flush.interval.messages = 9223372036854775807 log.flush.interval.ms = 
> null log.flush.offset.checkpoint.interval.ms = 60000 
> log.flush.scheduler.interval.ms = 9223372036854775807 
> log.flush.start.offset.checkpoint.interval.ms = 60000 
> log.index.interval.bytes = 4096 log.index.size.max.bytes = 10485760 
> log.message.downconversion.enable = true log.message.format.version = 2.2-IV1 
> log.message.timestam

[jira] [Created] (KAFKA-8122) Flaky Test EosIntegrationTest#shouldNotViolateEosIfOneTaskFailsWithState

2019-03-18 Thread Matthias J. Sax (JIRA)
Matthias J. Sax created KAFKA-8122:
--

 Summary: Flaky Test 
EosIntegrationTest#shouldNotViolateEosIfOneTaskFailsWithState
 Key: KAFKA-8122
 URL: https://issues.apache.org/jira/browse/KAFKA-8122
 Project: Kafka
  Issue Type: Bug
  Components: streams, unit tests
Affects Versions: 2.3.0
Reporter: Matthias J. Sax
 Fix For: 2.3.0


[https://builds.apache.org/job/kafka-pr-jdk11-scala2.12/3285/testReport/junit/org.apache.kafka.streams.integration/EosIntegrationTest/shouldNotViolateEosIfOneTaskFailsWithState/]
{quote}java.lang.AssertionError: Expected: <[KeyValue(0, 0), KeyValue(0, 1), 
KeyValue(0, 3), KeyValue(0, 6), KeyValue(0, 10), KeyValue(0, 15), KeyValue(0, 
21), KeyValue(0, 28), KeyValue(0, 36), KeyValue(0, 45)]> but: was <[KeyValue(0, 
0), KeyValue(0, 1), KeyValue(0, 3), KeyValue(0, 6), KeyValue(0, 10), 
KeyValue(0, 15), KeyValue(0, 21), KeyValue(0, 28), KeyValue(0, 36), KeyValue(0, 
45), KeyValue(0, 55), KeyValue(0, 66), KeyValue(0, 78), KeyValue(0, 91), 
KeyValue(0, 105)]> at 
org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:18) at 
org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:6) at 
org.apache.kafka.streams.integration.EosIntegrationTest.checkResultPerKey(EosIntegrationTest.java:212)
 at 
org.apache.kafka.streams.integration.EosIntegrationTest.shouldNotViolateEosIfOneTaskFailsWithState(EosIntegrationTest.java:414){quote}
STDOUT
{quote}[2019-03-17 01:19:51,971] INFO Created server with tickTime 800 
minSessionTimeout 1600 maxSessionTimeout 16000 datadir 
/tmp/kafka-10997967593034298484/version-2 snapdir 
/tmp/kafka-5184295822696533708/version-2 
(org.apache.zookeeper.server.ZooKeeperServer:174) [2019-03-17 01:19:51,971] 
INFO binding to port /127.0.0.1:0 
(org.apache.zookeeper.server.NIOServerCnxnFactory:89) [2019-03-17 01:19:51,973] 
INFO KafkaConfig values: advertised.host.name = null advertised.listeners = 
null advertised.port = null alter.config.policy.class.name = null 
alter.log.dirs.replication.quota.window.num = 11 
alter.log.dirs.replication.quota.window.size.seconds = 1 authorizer.class.name 
= auto.create.topics.enable = false auto.leader.rebalance.enable = true 
background.threads = 10 broker.id = 0 broker.id.generation.enable = true 
broker.rack = null client.quota.callback.class = null compression.type = 
producer connection.failed.authentication.delay.ms = 100 
connections.max.idle.ms = 600000 connections.max.reauth.ms = 0 
control.plane.listener.name = null controlled.shutdown.enable = true 
controlled.shutdown.max.retries = 3 controlled.shutdown.retry.backoff.ms = 5000 
controller.socket.timeout.ms = 30000 create.topic.policy.class.name = null 
default.replication.factor = 1 delegation.token.expiry.check.interval.ms = 
3600000 delegation.token.expiry.time.ms = 86400000 delegation.token.master.key 
= null delegation.token.max.lifetime.ms = 604800000 
delete.records.purgatory.purge.interval.requests = 1 delete.topic.enable = true 
fetch.purgatory.purge.interval.requests = 1000 group.initial.rebalance.delay.ms 
= 0 group.max.session.timeout.ms = 300000 group.max.size = 2147483647 
group.min.session.timeout.ms = 0 host.name = localhost 
inter.broker.listener.name = null inter.broker.protocol.version = 2.2-IV1 
kafka.metrics.polling.interval.secs = 10 kafka.metrics.reporters = [] 
leader.imbalance.check.interval.seconds = 300 
leader.imbalance.per.broker.percentage = 10 listener.security.protocol.map = 
PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL 
listeners = null log.cleaner.backoff.ms = 15000 log.cleaner.dedupe.buffer.size 
= 2097152 log.cleaner.delete.retention.ms = 86400000 log.cleaner.enable = true 
log.cleaner.io.buffer.load.factor = 0.9 log.cleaner.io.buffer.size = 524288 
log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308 
log.cleaner.min.cleanable.ratio = 0.5 log.cleaner.min.compaction.lag.ms = 0 
log.cleaner.threads = 1 log.cleanup.policy = [delete] log.dir = 
/tmp/junit16020146621422955757/junit17406374597406011269 log.dirs = null 
log.flush.interval.messages = 9223372036854775807 log.flush.interval.ms = null 
log.flush.offset.checkpoint.interval.ms = 60000 log.flush.scheduler.interval.ms 
= 9223372036854775807 log.flush.start.offset.checkpoint.interval.ms = 60000 
log.index.interval.bytes = 4096 log.index.size.max.bytes = 10485760 
log.message.downconversion.enable = true log.message.format.version = 2.2-IV1 
log.message.timestamp.difference.max.ms = 9223372036854775807 
log.message.timestamp.type = CreateTime log.preallocate = false 
log.retention.bytes = -1 log.retention.check.interval.ms = 300000 
log.retention.hours = 168 log.retention.minutes = null log.retention.ms = null 
log.roll.hours = 168 log.roll.jitter.hours = 0 log.roll.jitter.ms = null 
log.roll.ms = null log.segment.bytes = 1073741824 log.segment.delete.delay.ms = 
60000 max.connections = 2147483647 max.connections.p
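
As an aside on reading the assertion above: the expected list is the running sum of 
the first ten input values (0..9) for key 0, and the observed extras (55 through 105) 
are the sums through inputs 10..14, i.e. five more output records than the test 
expected at that point. A quick check:
{code}
public class EosExpectedValues {
    public static void main(String[] args) {
        int sum = 0;
        for (int v = 0; v <= 14; v++) {
            sum += v;
            // v = 9 yields 45, the last expected record; v = 10..14 yield the
            // extra records 55, 66, 78, 91, 105 seen in the failing run.
            System.out.println("KeyValue(0, " + sum + ")");
        }
    }
}
{code}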

[jira] [Assigned] (KAFKA-8078) Flaky Test TableTableJoinIntegrationTest#testInnerInner

2019-03-18 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax reassigned KAFKA-8078:
--

Assignee: Matthias J. Sax

> Flaky Test TableTableJoinIntegrationTest#testInnerInner
> ---
>
> Key: KAFKA-8078
> URL: https://issues.apache.org/jira/browse/KAFKA-8078
> Project: Kafka
>  Issue Type: Bug
>  Components: streams, unit tests
>Affects Versions: 2.3.0
>Reporter: Matthias J. Sax
>Assignee: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0
>
>
> [https://builds.apache.org/blue/organizations/jenkins/kafka-trunk-jdk8/detail/kafka-trunk-jdk8/3445/tests]
> {quote}java.lang.AssertionError: Condition not met within timeout 15000. 
> Never received expected final result.
> at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:365)
> at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:325)
> at 
> org.apache.kafka.streams.integration.AbstractJoinIntegrationTest.runTest(AbstractJoinIntegrationTest.java:246)
> at 
> org.apache.kafka.streams.integration.TableTableJoinIntegrationTest.testInnerInner(TableTableJoinIntegrationTest.java:196){quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-6409) LogRecoveryTest (testHWCheckpointWithFailuresSingleLogSegment) is flaky

2019-03-18 Thread Matthias J. Sax (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16795358#comment-16795358
 ] 

Matthias J. Sax commented on KAFKA-6409:


Seems to be a duplicate of KAFKA-5492?

Also, this test failed again: 
[https://builds.apache.org/job/kafka-pr-jdk8-scala2.11/20440/testReport/junit/kafka.server/LogRecoveryTest/testHWCheckpointWithFailuresSingleLogSegment/]

 

> LogRecoveryTest (testHWCheckpointWithFailuresSingleLogSegment) is flaky
> ---
>
> Key: KAFKA-6409
> URL: https://issues.apache.org/jira/browse/KAFKA-6409
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Reporter: Wladimir Schmidt
>Priority: Major
>
> In LogRecoveryTest, the test named 
> testHWCheckpointWithFailuresSingleLogSegment is affected and not stable: 
> sometimes it passes, sometimes it fails.
> Scala 2.12. JDK9
> java.lang.AssertionError: Timing out after 30000 ms since a new leader that 
> is different from 1 was not elected for partition new-topic-0, leader is 
> Some(1)
>   at kafka.utils.TestUtils$.fail(TestUtils.scala:351)
>   at 
> kafka.utils.TestUtils$.$anonfun$waitUntilLeaderIsElectedOrChanged$8(TestUtils.scala:828)
>   at scala.Option.getOrElse(Option.scala:121)
>   at 
> kafka.utils.TestUtils$.waitUntilLeaderIsElectedOrChanged(TestUtils.scala:818)
>   at 
> kafka.server.LogRecoveryTest.testHWCheckpointWithFailuresSingleLogSegment(LogRecoveryTest.scala:152)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.base/java.lang.reflect.Method.invoke(Method.java:564)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:114)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:57)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:66)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
>   at jdk.internal.reflect.GeneratedMethodAccessor9.invoke(Unknown Source)
>   at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.base/java.lang.reflect.Method.invoke(Method.java:564)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
>   at 
> org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
>   at com.sun.proxy.$Proxy1.processTestClass(Unknown Source)
>   at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:108)
>   at jdk.internal.reflect.GeneratedMethodAccessor8.invoke(Unknown Source)
> 

[jira] [Commented] (KAFKA-5492) LogRecoveryTest.testHWCheckpointWithFailuresSingleLogSegment transient failure

2019-03-18 Thread Matthias J. Sax (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-5492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16795359#comment-16795359
 ] 

Matthias J. Sax commented on KAFKA-5492:


Seems to be a duplicate of KAFKA-6409?

Also, this test failed again: 
[https://builds.apache.org/job/kafka-pr-jdk8-scala2.11/20440/testReport/junit/kafka.server/LogRecoveryTest/testHWCheckpointWithFailuresSingleLogSegment/]

> LogRecoveryTest.testHWCheckpointWithFailuresSingleLogSegment transient failure
> --
>
> Key: KAFKA-5492
> URL: https://issues.apache.org/jira/browse/KAFKA-5492
> Project: Kafka
>  Issue Type: Sub-task
>  Components: unit tests
>Reporter: Jason Gustafson
>Priority: Major
>
> {code}
> java.lang.AssertionError: Timing out after 30000 ms since a new leader that is 
> different from 1 was not elected for partition new-topic-0, leader is Some(1)
>   at kafka.utils.TestUtils$.fail(TestUtils.scala:333)
>   at 
> kafka.utils.TestUtils$.$anonfun$waitUntilLeaderIsElectedOrChanged$8(TestUtils.scala:808)
>   at scala.Option.getOrElse(Option.scala:121)
>   at 
> kafka.utils.TestUtils$.waitUntilLeaderIsElectedOrChanged(TestUtils.scala:798)
>   at 
> kafka.server.LogRecoveryTest.testHWCheckpointWithFailuresSingleLogSegment(LogRecoveryTest.scala:152)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Reopened] (KAFKA-8107) Flaky Test kafka.api.ClientIdQuotaTest.testQuotaOverrideDelete

2019-03-18 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax reopened KAFKA-8107:


> Flaky Test kafka.api.ClientIdQuotaTest.testQuotaOverrideDelete
> --
>
> Key: KAFKA-8107
> URL: https://issues.apache.org/jira/browse/KAFKA-8107
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Guozhang Wang
>Priority: Major
>
> {code}
> java.lang.AssertionError: Client with id=QuotasTestProducer-!@#$%^&*() should 
> have been throttled
>   at org.junit.Assert.fail(Assert.java:89)
>   at org.junit.Assert.assertTrue(Assert.java:42)
>   at 
> kafka.api.QuotaTestClients.verifyThrottleTimeMetric(BaseQuotaTest.scala:229)
>   at 
> kafka.api.QuotaTestClients.verifyProduceThrottle(BaseQuotaTest.scala:215)
>   at 
> kafka.api.BaseQuotaTest.testQuotaOverrideDelete(BaseQuotaTest.scala:124)
> {code}
> https://builds.apache.org/job/kafka-pr-jdk11-scala2.12/3230/testReport/junit/kafka.api/ClientIdQuotaTest/testQuotaOverrideDelete/
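
For readers unfamiliar with the assertion: the quota tests send load and then check a 
client-side throttle-time metric; a minimal sketch of that kind of check (the metric 
name is the one the Java producer exposes; the helper itself is made up) is:
{code}
import java.util.Map;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.common.Metric;
import org.apache.kafka.common.MetricName;

public class ThrottleCheckSketch {
    // Returns the producer's produce-throttle-time-max metric, or 0.0 if absent.
    static double produceThrottleTimeMax(KafkaProducer<?, ?> producer) {
        for (Map.Entry<MetricName, ? extends Metric> e : producer.metrics().entrySet()) {
            if (e.getKey().name().equals("produce-throttle-time-max"))
                return (Double) e.getValue().metricValue();
        }
        return 0.0;
    }
    // A producer that exceeded its quota should report a value > 0 here;
    // "should have been throttled" means this stayed at 0 within the test's wait.
}
{code}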



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-8107) Flaky Test kafka.api.ClientIdQuotaTest.testQuotaOverrideDelete

2019-03-18 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax resolved KAFKA-8107.

Resolution: Duplicate

> Flaky Test kafka.api.ClientIdQuotaTest.testQuotaOverrideDelete
> --
>
> Key: KAFKA-8107
> URL: https://issues.apache.org/jira/browse/KAFKA-8107
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Guozhang Wang
>Priority: Major
>
> {code}
> java.lang.AssertionError: Client with id=QuotasTestProducer-!@#$%^&*() should 
> have been throttled
>   at org.junit.Assert.fail(Assert.java:89)
>   at org.junit.Assert.assertTrue(Assert.java:42)
>   at 
> kafka.api.QuotaTestClients.verifyThrottleTimeMetric(BaseQuotaTest.scala:229)
>   at 
> kafka.api.QuotaTestClients.verifyProduceThrottle(BaseQuotaTest.scala:215)
>   at 
> kafka.api.BaseQuotaTest.testQuotaOverrideDelete(BaseQuotaTest.scala:124)
> {code}
> https://builds.apache.org/job/kafka-pr-jdk11-scala2.12/3230/testReport/junit/kafka.api/ClientIdQuotaTest/testQuotaOverrideDelete/



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8107) Flaky Test kafka.api.ClientIdQuotaTest.testQuotaOverrideDelete

2019-03-18 Thread Matthias J. Sax (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16795463#comment-16795463
 ] 

Matthias J. Sax commented on KAFKA-8107:


Failed again: 
[https://builds.apache.org/blue/organizations/jenkins/kafka-trunk-jdk11/detail/kafka-trunk-jdk11/382/tests]

> Flaky Test kafka.api.ClientIdQuotaTest.testQuotaOverrideDelete
> --
>
> Key: KAFKA-8107
> URL: https://issues.apache.org/jira/browse/KAFKA-8107
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Guozhang Wang
>Priority: Major
>
> {code}
> java.lang.AssertionError: Client with id=QuotasTestProducer-!@#$%^&*() should 
> have been throttled
>   at org.junit.Assert.fail(Assert.java:89)
>   at org.junit.Assert.assertTrue(Assert.java:42)
>   at 
> kafka.api.QuotaTestClients.verifyThrottleTimeMetric(BaseQuotaTest.scala:229)
>   at 
> kafka.api.QuotaTestClients.verifyProduceThrottle(BaseQuotaTest.scala:215)
>   at 
> kafka.api.BaseQuotaTest.testQuotaOverrideDelete(BaseQuotaTest.scala:124)
> {code}
> https://builds.apache.org/job/kafka-pr-jdk11-scala2.12/3230/testReport/junit/kafka.api/ClientIdQuotaTest/testQuotaOverrideDelete/



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-8107) Flaky Test kafka.api.ClientIdQuotaTest.testQuotaOverrideDelete

2019-03-18 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-8107:
---
Priority: Critical  (was: Major)

> Flaky Test kafka.api.ClientIdQuotaTest.testQuotaOverrideDelete
> --
>
> Key: KAFKA-8107
> URL: https://issues.apache.org/jira/browse/KAFKA-8107
> Project: Kafka
>  Issue Type: Bug
>Reporter: Guozhang Wang
>Priority: Critical
>
> {code}
> java.lang.AssertionError: Client with id=QuotasTestProducer-!@#$%^&*() should 
> have been throttled
>   at org.junit.Assert.fail(Assert.java:89)
>   at org.junit.Assert.assertTrue(Assert.java:42)
>   at 
> kafka.api.QuotaTestClients.verifyThrottleTimeMetric(BaseQuotaTest.scala:229)
>   at 
> kafka.api.QuotaTestClients.verifyProduceThrottle(BaseQuotaTest.scala:215)
>   at 
> kafka.api.BaseQuotaTest.testQuotaOverrideDelete(BaseQuotaTest.scala:124)
> {code}
> https://builds.apache.org/job/kafka-pr-jdk11-scala2.12/3230/testReport/junit/kafka.api/ClientIdQuotaTest/testQuotaOverrideDelete/



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-8107) Flaky Test kafka.api.ClientIdQuotaTest.testQuotaOverrideDelete

2019-03-18 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-8107:
---
Issue Type: Bug  (was: Improvement)

> Flaky Test kafka.api.ClientIdQuotaTest.testQuotaOverrideDelete
> --
>
> Key: KAFKA-8107
> URL: https://issues.apache.org/jira/browse/KAFKA-8107
> Project: Kafka
>  Issue Type: Bug
>Reporter: Guozhang Wang
>Priority: Major
>
> {code}
> java.lang.AssertionError: Client with id=QuotasTestProducer-!@#$%^&*() should 
> have been throttled
>   at org.junit.Assert.fail(Assert.java:89)
>   at org.junit.Assert.assertTrue(Assert.java:42)
>   at 
> kafka.api.QuotaTestClients.verifyThrottleTimeMetric(BaseQuotaTest.scala:229)
>   at 
> kafka.api.QuotaTestClients.verifyProduceThrottle(BaseQuotaTest.scala:215)
>   at 
> kafka.api.BaseQuotaTest.testQuotaOverrideDelete(BaseQuotaTest.scala:124)
> {code}
> https://builds.apache.org/job/kafka-pr-jdk11-scala2.12/3230/testReport/junit/kafka.api/ClientIdQuotaTest/testQuotaOverrideDelete/



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Issue Comment Deleted] (KAFKA-8031) Flaky Test UserClientIdQuotaTest#testQuotaOverrideDelete

2019-03-18 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-8031:
---
Comment: was deleted

(was: Failed again: 
[https://builds.apache.org/blue/organizations/jenkins/kafka-trunk-jdk11/detail/kafka-trunk-jdk11/382/tests])

> Flaky Test UserClientIdQuotaTest#testQuotaOverrideDelete
> 
>
> Key: KAFKA-8031
> URL: https://issues.apache.org/jira/browse/KAFKA-8031
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.3.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0
>
>
> [https://builds.apache.org/job/kafka-pr-jdk11-scala2.12/2830/testReport/junit/kafka.api/UserClientIdQuotaTest/testQuotaOverrideDelete/]
> {quote}java.lang.AssertionError: Client with id=QuotasTestConsumer-!@#$%^&*() 
> should have been throttled at org.junit.Assert.fail(Assert.java:89) at 
> org.junit.Assert.assertTrue(Assert.java:42) at 
> kafka.api.QuotaTestClients.verifyThrottleTimeMetric(BaseQuotaTest.scala:229) 
> at kafka.api.QuotaTestClients.verifyConsumeThrottle(BaseQuotaTest.scala:221) 
> at 
> kafka.api.BaseQuotaTest.testQuotaOverrideDelete(BaseQuotaTest.scala:130){quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-8107) Flaky Test kafka.api.ClientIdQuotaTest.testQuotaOverrideDelete

2019-03-18 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-8107:
---
Fix Version/s: 2.3.0

> Flaky Test kafka.api.ClientIdQuotaTest.testQuotaOverrideDelete
> --
>
> Key: KAFKA-8107
> URL: https://issues.apache.org/jira/browse/KAFKA-8107
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.3.0
>Reporter: Guozhang Wang
>Priority: Critical
> Fix For: 2.3.0
>
>
> {code}
> java.lang.AssertionError: Client with id=QuotasTestProducer-!@#$%^&*() should 
> have been throttled
>   at org.junit.Assert.fail(Assert.java:89)
>   at org.junit.Assert.assertTrue(Assert.java:42)
>   at 
> kafka.api.QuotaTestClients.verifyThrottleTimeMetric(BaseQuotaTest.scala:229)
>   at 
> kafka.api.QuotaTestClients.verifyProduceThrottle(BaseQuotaTest.scala:215)
>   at 
> kafka.api.BaseQuotaTest.testQuotaOverrideDelete(BaseQuotaTest.scala:124)
> {code}
> https://builds.apache.org/job/kafka-pr-jdk11-scala2.12/3230/testReport/junit/kafka.api/ClientIdQuotaTest/testQuotaOverrideDelete/



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-8107) Flaky Test kafka.api.ClientIdQuotaTest.testQuotaOverrideDelete

2019-03-18 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-8107:
---
Labels: flaky-test  (was: )

> Flaky Test kafka.api.ClientIdQuotaTest.testQuotaOverrideDelete
> --
>
> Key: KAFKA-8107
> URL: https://issues.apache.org/jira/browse/KAFKA-8107
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.3.0
>Reporter: Guozhang Wang
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0
>
>
> {code}
> java.lang.AssertionError: Client with id=QuotasTestProducer-!@#$%^&*() should 
> have been throttled
>   at org.junit.Assert.fail(Assert.java:89)
>   at org.junit.Assert.assertTrue(Assert.java:42)
>   at 
> kafka.api.QuotaTestClients.verifyThrottleTimeMetric(BaseQuotaTest.scala:229)
>   at 
> kafka.api.QuotaTestClients.verifyProduceThrottle(BaseQuotaTest.scala:215)
>   at 
> kafka.api.BaseQuotaTest.testQuotaOverrideDelete(BaseQuotaTest.scala:124)
> {code}
> https://builds.apache.org/job/kafka-pr-jdk11-scala2.12/3230/testReport/junit/kafka.api/ClientIdQuotaTest/testQuotaOverrideDelete/



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-8107) Flaky Test kafka.api.ClientIdQuotaTest.testQuotaOverrideDelete

2019-03-18 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-8107:
---
Component/s: unit tests
 core

> Flaky Test kafka.api.ClientIdQuotaTest.testQuotaOverrideDelete
> --
>
> Key: KAFKA-8107
> URL: https://issues.apache.org/jira/browse/KAFKA-8107
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.3.0
>Reporter: Guozhang Wang
>Priority: Critical
> Fix For: 2.3.0
>
>
> {code}
> java.lang.AssertionError: Client with id=QuotasTestProducer-!@#$%^&*() should 
> have been throttled
>   at org.junit.Assert.fail(Assert.java:89)
>   at org.junit.Assert.assertTrue(Assert.java:42)
>   at 
> kafka.api.QuotaTestClients.verifyThrottleTimeMetric(BaseQuotaTest.scala:229)
>   at 
> kafka.api.QuotaTestClients.verifyProduceThrottle(BaseQuotaTest.scala:215)
>   at 
> kafka.api.BaseQuotaTest.testQuotaOverrideDelete(BaseQuotaTest.scala:124)
> {code}
> https://builds.apache.org/job/kafka-pr-jdk11-scala2.12/3230/testReport/junit/kafka.api/ClientIdQuotaTest/testQuotaOverrideDelete/



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-8107) Flaky Test kafka.api.ClientIdQuotaTest.testQuotaOverrideDelete

2019-03-18 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-8107:
---
Affects Version/s: 2.3.0

> Flaky Test kafka.api.ClientIdQuotaTest.testQuotaOverrideDelete
> --
>
> Key: KAFKA-8107
> URL: https://issues.apache.org/jira/browse/KAFKA-8107
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.3.0
>Reporter: Guozhang Wang
>Priority: Critical
>
> {code}
> java.lang.AssertionError: Client with id=QuotasTestProducer-!@#$%^&*() should 
> have been throttled
>   at org.junit.Assert.fail(Assert.java:89)
>   at org.junit.Assert.assertTrue(Assert.java:42)
>   at 
> kafka.api.QuotaTestClients.verifyThrottleTimeMetric(BaseQuotaTest.scala:229)
>   at 
> kafka.api.QuotaTestClients.verifyProduceThrottle(BaseQuotaTest.scala:215)
>   at 
> kafka.api.BaseQuotaTest.testQuotaOverrideDelete(BaseQuotaTest.scala:124)
> {code}
> https://builds.apache.org/job/kafka-pr-jdk11-scala2.12/3230/testReport/junit/kafka.api/ClientIdQuotaTest/testQuotaOverrideDelete/



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-8123) Flaky Test RequestQuotaTest#testResponseThrottleTimeWhenBothProduceAndRequestQuotasViolated

2019-03-18 Thread Matthias J. Sax (JIRA)
Matthias J. Sax created KAFKA-8123:
--

 Summary: Flaky Test 
RequestQuotaTest#testResponseThrottleTimeWhenBothProduceAndRequestQuotasViolated
 
 Key: KAFKA-8123
 URL: https://issues.apache.org/jira/browse/KAFKA-8123
 Project: Kafka
  Issue Type: Bug
  Components: core, unit tests
Affects Versions: 2.3.0
Reporter: Matthias J. Sax
 Fix For: 2.3.0


[https://builds.apache.org/blue/organizations/jenkins/kafka-trunk-jdk8/detail/kafka-trunk-jdk8/3474/tests]
{quote}java.util.concurrent.ExecutionException: java.lang.AssertionError: 
Throttle time metrics for produce quota not updated: Client 
small-quota-producer-client apiKey PRODUCE requests 1 requestTime 
0.015790873650539786 throttleTime 1000.0
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:206)
at 
kafka.server.RequestQuotaTest$$anonfun$waitAndCheckResults$1.apply(RequestQuotaTest.scala:423)
at 
kafka.server.RequestQuotaTest$$anonfun$waitAndCheckResults$1.apply(RequestQuotaTest.scala:421)
at scala.collection.immutable.List.foreach(List.scala:392)
at 
scala.collection.generic.TraversableForwarder$class.foreach(TraversableForwarder.scala:35)
at scala.collection.mutable.ListBuffer.foreach(ListBuffer.scala:45)
at kafka.server.RequestQuotaTest.waitAndCheckResults(RequestQuotaTest.scala:421)
at 
kafka.server.RequestQuotaTest.testResponseThrottleTimeWhenBothProduceAndRequestQuotasViolated(RequestQuotaTest.scala:130){quote}
STDOUT
{quote}[2019-03-18 21:42:16,637] ERROR [KafkaApi-0] Error when handling 
request: clientId=unauthorized-CONTROLLED_SHUTDOWN, correlationId=1, 
api=CONTROLLED_SHUTDOWN, body=\{broker_id=0,broker_epoch=9223372036854775807} 
(kafka.server.KafkaApis:76)
org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
Request(processor=2, connectionId=127.0.0.1:42118-127.0.0.1:47612-1, 
session=Session(User:Unauthorized,/127.0.0.1), 
listenerName=ListenerName(PLAINTEXT), securityProtocol=PLAINTEXT, buffer=null) 
is not authorized.
[2019-03-18 21:42:16,655] ERROR [KafkaApi-0] Error when handling request: 
clientId=unauthorized-STOP_REPLICA, correlationId=1, api=STOP_REPLICA, 
body=\{controller_id=0,controller_epoch=2147483647,broker_epoch=9223372036854775807,delete_partitions=true,partitions=[{topic=topic-1,partition_ids=[0]}]}
 (kafka.server.KafkaApis:76)
org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
Request(processor=0, connectionId=127.0.0.1:42118-127.0.0.1:47614-2, 
session=Session(User:Unauthorized,/127.0.0.1), 
listenerName=ListenerName(PLAINTEXT), securityProtocol=PLAINTEXT, buffer=null) 
is not authorized.
[2019-03-18 21:42:16,657] ERROR [KafkaApi-0] Error when handling request: 
clientId=unauthorized-LEADER_AND_ISR, correlationId=1, api=LEADER_AND_ISR, 
body=\{controller_id=0,controller_epoch=2147483647,broker_epoch=9223372036854775807,topic_states=[{topic=topic-1,partition_states=[{partition=0,controller_epoch=2147483647,leader=0,leader_epoch=2147483647,isr=[0],zk_version=2,replicas=[0],is_new=true}]}],live_leaders=[\{id=0,host=localhost,port=0}]}
 (kafka.server.KafkaApis:76)
org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
Request(processor=1, connectionId=127.0.0.1:42118-127.0.0.1:47616-2, 
session=Session(User:Unauthorized,/127.0.0.1), 
listenerName=ListenerName(PLAINTEXT), securityProtocol=PLAINTEXT, buffer=null) 
is not authorized.
[2019-03-18 21:42:16,668] ERROR [KafkaApi-0] Error when handling request: 
clientId=unauthorized-UPDATE_METADATA, correlationId=1, api=UPDATE_METADATA, 
body=\{controller_id=0,controller_epoch=2147483647,broker_epoch=9223372036854775807,topic_states=[{topic=topic-1,partition_states=[{partition=0,controller_epoch=2147483647,leader=0,leader_epoch=2147483647,isr=[0],zk_version=2,replicas=[0],offline_replicas=[]}]}],live_brokers=[\{id=0,end_points=[{port=0,host=localhost,listener_name=PLAINTEXT,security_protocol_type=0}],rack=null}]}
 (kafka.server.KafkaApis:76)
org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
Request(processor=2, connectionId=127.0.0.1:42118-127.0.0.1:47618-2, 
session=Session(User:Unauthorized,/127.0.0.1), 
listenerName=ListenerName(PLAINTEXT), securityProtocol=PLAINTEXT, buffer=null) 
is not authorized.
[2019-03-18 21:42:16,725] ERROR [KafkaApi-0] Error when handling request: 
clientId=unauthorized-STOP_REPLICA, correlationId=2, api=STOP_REPLICA, 
body=\{controller_id=0,controller_epoch=2147483647,broker_epoch=9223372036854775807,delete_partitions=true,partitions=[{topic=topic-1,partition_ids=[0]}]}
 (kafka.server.KafkaApis:76)
org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
Request(processor=0, connectionId=127.0.0.1:42118-127.0.0.1:47614-2, 
session=Session(User:Unauthorized,/127.0.0.1), 
listenerName=ListenerName(PLAINTEXT), securityProtocol=PLAINTEXT, buffer=null) 
is not authorized.

[jira] [Commented] (KAFKA-8031) Flaky Test UserClientIdQuotaTest#testQuotaOverrideDelete

2019-03-18 Thread Matthias J. Sax (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16795461#comment-16795461
 ] 

Matthias J. Sax commented on KAFKA-8031:


Failed again: 
[https://builds.apache.org/blue/organizations/jenkins/kafka-trunk-jdk11/detail/kafka-trunk-jdk11/382/tests]

> Flaky Test UserClientIdQuotaTest#testQuotaOverrideDelete
> 
>
> Key: KAFKA-8031
> URL: https://issues.apache.org/jira/browse/KAFKA-8031
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.3.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0
>
>
> [https://builds.apache.org/job/kafka-pr-jdk11-scala2.12/2830/testReport/junit/kafka.api/UserClientIdQuotaTest/testQuotaOverrideDelete/]
> {quote}java.lang.AssertionError: Client with id=QuotasTestConsumer-!@#$%^&*() 
> should have been throttled at org.junit.Assert.fail(Assert.java:89) at 
> org.junit.Assert.assertTrue(Assert.java:42) at 
> kafka.api.QuotaTestClients.verifyThrottleTimeMetric(BaseQuotaTest.scala:229) 
> at kafka.api.QuotaTestClients.verifyConsumeThrottle(BaseQuotaTest.scala:221) 
> at 
> kafka.api.BaseQuotaTest.testQuotaOverrideDelete(BaseQuotaTest.scala:130){quote}





[jira] [Commented] (KAFKA-7965) Flaky Test ConsumerBounceTest#testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup

2019-03-18 Thread Matthias J. Sax (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16795460#comment-16795460
 ] 

Matthias J. Sax commented on KAFKA-7965:


[https://builds.apache.org/blue/organizations/jenkins/kafka-trunk-jdk11/detail/kafka-trunk-jdk11/382/tests]

> Flaky Test 
> ConsumerBounceTest#testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup
> 
>
> Key: KAFKA-7965
> URL: https://issues.apache.org/jira/browse/KAFKA-7965
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, consumer, unit tests
>Affects Versions: 2.2.0, 2.3.0
>Reporter: Matthias J. Sax
>Assignee: Stanislav Kozlovski
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.2.1
>
>
> To get stable nightly builds for the `2.2` release, I create tickets for all 
> observed test failures.
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/21/]
> {quote}java.lang.AssertionError: Received 0, expected at least 68 at 
> org.junit.Assert.fail(Assert.java:88) at 
> org.junit.Assert.assertTrue(Assert.java:41) at 
> kafka.api.ConsumerBounceTest.receiveAndCommit(ConsumerBounceTest.scala:557) 
> at 
> kafka.api.ConsumerBounceTest.$anonfun$testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup$1(ConsumerBounceTest.scala:320)
>  at 
> kafka.api.ConsumerBounceTest.$anonfun$testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup$1$adapted(ConsumerBounceTest.scala:319)
>  at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62) 
> at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55) 
> at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49) at 
> kafka.api.ConsumerBounceTest.testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup(ConsumerBounceTest.scala:319){quote}





[jira] [Commented] (KAFKA-6078) Investigate failure of ReassignPartitionsClusterTest.shouldExpandCluster

2019-03-19 Thread Matthias J. Sax (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16796420#comment-16796420
 ] 

Matthias J. Sax commented on KAFKA-6078:


Failed again: 
[https://builds.apache.org/blue/organizations/jenkins/kafka-trunk-jdk8/detail/kafka-trunk-jdk8/3475/tests]
{quote}java.lang.AssertionError: Partition should have been moved to the 
expected log directory on broker 100
at kafka.utils.TestUtils$.fail(TestUtils.scala:381)
at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:791)
at 
kafka.admin.ReassignPartitionsClusterTest.shouldExpandCluster(ReassignPartitionsClusterTest.scala:190){quote}
STDOUT
{quote}[2019-03-19 01:20:43,339] ERROR [ReplicaFetcher replicaId=101, 
leaderId=100, fetcherId=0] Error for partition my-topic-0 at offset 0 
(kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
does not host this topic-partition.
Current partition replica assignment
 
{"version":1,"partitions":[\{"topic":"my-topic","partition":0,"replicas":[100,101],"log_dirs":["any","any"]}]}
 
Save this to use as the --reassignment-json-file option during rollback
Warning: You must run Verify periodically, until the reassignment completes, to 
ensure the throttle is removed. You can also alter the throttle by rerunning 
the Execute command passing a new value.
The inter-broker throttle limit was set to 1000 B/s
Successfully started reassignment of partitions.
Current partition replica assignment
 
{"version":1,"partitions":[\{"topic":"my-topic","partition":0,"replicas":[100],"log_dirs":["any"]}]}
 
Save this to use as the --reassignment-json-file option during rollback
Successfully started reassignment of partitions.
[2019-03-19 01:20:58,605] ERROR [ReplicaFetcher replicaId=2, leaderId=1, 
fetcherId=0] Error for partition orders-1 at offset 0 
(kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
does not host this topic-partition.
[2019-03-19 01:21:05,636] ERROR [Controller id=0] Ignoring request to reassign 
partition customers-0 that doesn't exist. (kafka.controller.KafkaController:74)
[2019-03-19 01:21:11,383] ERROR [ReplicaFetcher replicaId=104, leaderId=103, 
fetcherId=0] Error for partition topic1-2 at offset 0 
(kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
does not host this topic-partition.
[2019-03-19 01:21:11,390] ERROR [ReplicaFetcher replicaId=101, leaderId=100, 
fetcherId=0] Error for partition topic1-0 at offset 0 
(kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
does not host this topic-partition.
[2019-03-19 01:21:11,390] ERROR [ReplicaFetcher replicaId=101, leaderId=100, 
fetcherId=0] Error for partition topic1-1 at offset 0 
(kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
does not host this topic-partition.
[2019-03-19 01:21:11,801] ERROR [ReplicaFetcher replicaId=105, leaderId=104, 
fetcherId=0] Error for partition topic2-1 at offset 0 
(kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
does not host this topic-partition.
[2019-03-19 01:21:11,801] ERROR [ReplicaFetcher replicaId=105, leaderId=104, 
fetcherId=0] Error for partition topic2-0 at offset 0 
(kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
does not host this topic-partition.
Current partition replica assignment
 
{"version":1,"partitions":[\{"topic":"topic1","partition":2,"replicas":[103,104],"log_dirs":["any","any"]},\{"topic":"topic2","partition":0,"replicas":[104,105],"log_dirs":["any","any"]},\{"topic":"topic2","partition":1,"replicas":[104,105],"log_dirs":["any","any"]},\{"topic":"topic1","partition":0,"replicas":[100,101],"log_dirs":["any","any"]},\{"topic":"topic1","partition":1,"replicas":[100,101],"log_dirs":["any","any"]},\{"topic":"topic2","partition":2,"replicas":[103,104],"log_dirs":["any","any"]}]}
 
Save this to use as the --reassignment-json-file option during rollback
Warning: You must run Verify periodically, until the reassignment completes, to 
ensure the throttle is removed. You can also alter the throttle by rerunning 
the Execute command passing a new value.
The inter-broker throttle limit was set to 100 B/s
Successfully started reassignment of partitions.
[2019-03-19 01:21:18,518] ERROR [ReplicaFetcher replicaId=101, leaderId=100, 
fetcherId=0] Error for partition my-topic-1 at offset 0 
(kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
does not host this topic-partition.
[2019-03-19 01:21:18,518] ERROR [ReplicaFetcher replicaId=101, leaderId=100, 
fetcherId=0] Error for 

[jira] [Updated] (KAFKA-6078) Investigate failure of ReassignPartitionsClusterTest.shouldExpandCluster

2019-03-19 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-6078:
---
Fix Version/s: 2.3.0

> Investigate failure of ReassignPartitionsClusterTest.shouldExpandCluster
> 
>
> Key: KAFKA-6078
> URL: https://issues.apache.org/jira/browse/KAFKA-6078
> Project: Kafka
>  Issue Type: Bug
>  Components: unit tests
>Affects Versions: 2.3.0
>Reporter: Dong Lin
>Priority: Critical
> Fix For: 2.3.0
>
>
> See https://github.com/apache/kafka/pull/4084





[jira] [Updated] (KAFKA-6078) Investigate failure of ReassignPartitionsClusterTest.shouldExpandCluster

2019-03-19 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-6078:
---
Affects Version/s: 2.3.0

> Investigate failure of ReassignPartitionsClusterTest.shouldExpandCluster
> 
>
> Key: KAFKA-6078
> URL: https://issues.apache.org/jira/browse/KAFKA-6078
> Project: Kafka
>  Issue Type: Bug
>  Components: unit tests
>Affects Versions: 2.3.0
>Reporter: Dong Lin
>Priority: Major
>
> See https://github.com/apache/kafka/pull/4084





[jira] [Updated] (KAFKA-6078) Investigate failure of ReassignPartitionsClusterTest.shouldExpandCluster

2019-03-19 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-6078:
---
Priority: Critical  (was: Major)

> Investigate failure of ReassignPartitionsClusterTest.shouldExpandCluster
> 
>
> Key: KAFKA-6078
> URL: https://issues.apache.org/jira/browse/KAFKA-6078
> Project: Kafka
>  Issue Type: Bug
>  Components: unit tests
>Affects Versions: 2.3.0
>Reporter: Dong Lin
>Priority: Critical
>
> See https://github.com/apache/kafka/pull/4084





[jira] [Updated] (KAFKA-6078) Investigate failure of ReassignPartitionsClusterTest.shouldExpandCluster

2019-03-19 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-6078:
---
Component/s: core

> Investigate failure of ReassignPartitionsClusterTest.shouldExpandCluster
> 
>
> Key: KAFKA-6078
> URL: https://issues.apache.org/jira/browse/KAFKA-6078
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.3.0
>Reporter: Dong Lin
>Priority: Critical
> Fix For: 2.3.0
>
>
> See https://github.com/apache/kafka/pull/4084





[jira] [Updated] (KAFKA-6078) Investigate failure of ReassignPartitionsClusterTest.shouldExpandCluster

2019-03-19 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-6078:
---
Labels: flaky-test  (was: )

> Investigate failure of ReassignPartitionsClusterTest.shouldExpandCluster
> 
>
> Key: KAFKA-6078
> URL: https://issues.apache.org/jira/browse/KAFKA-6078
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.3.0
>Reporter: Dong Lin
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0
>
>
> See https://github.com/apache/kafka/pull/4084





[jira] [Commented] (KAFKA-8063) Flaky Test WorkerTest#testConverterOverrides

2019-03-19 Thread Matthias J. Sax (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16796421#comment-16796421
 ] 

Matthias J. Sax commented on KAFKA-8063:


Failed again: 
[https://builds.apache.org/blue/organizations/jenkins/kafka-trunk-jdk8/detail/kafka-trunk-jdk8/3476/tests]
{quote}java.lang.AssertionError:
Expectation failure on verify:
WorkerSourceTask.run(): expected: 1, actual: 0{quote}
STDOUT
{quote}[2019-03-19 08:03:25,575] (Test worker) ERROR Failed to start connector 
test-connector (org.apache.kafka.connect.runtime.Worker:234)
org.apache.kafka.connect.errors.ConnectException: Failed to find Connector
at 
org.easymock.internal.MockInvocationHandler.invoke(MockInvocationHandler.java:46)
at 
org.easymock.internal.ObjectMethodsFilter.invoke(ObjectMethodsFilter.java:101)
at 
org.easymock.internal.ClassProxyFactory$MockMethodInterceptor.intercept(ClassProxyFactory.java:97)
at 
org.apache.kafka.connect.runtime.isolation.Plugins$$EnhancerByCGLIB$$205db954.newConnector()
at org.apache.kafka.connect.runtime.Worker.startConnector(Worker.java:226)
at 
org.apache.kafka.connect.runtime.WorkerTest.testStartConnectorFailure(WorkerTest.java:256)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.internal.runners.TestMethod.invoke(TestMethod.java:68)
at 
org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$PowerMockJUnit44MethodRunner.runTestMethod(PowerMockJUnit44RunnerDelegateImpl.java:326)
at org.junit.internal.runners.MethodRoadie$2.run(MethodRoadie.java:89)
at 
org.junit.internal.runners.MethodRoadie.runBeforesThenTestThenAfters(MethodRoadie.java:97)
at 
org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$PowerMockJUnit44MethodRunner.executeTest(PowerMockJUnit44RunnerDelegateImpl.java:310)
at 
org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner.executeTestInSuper(PowerMockJUnit47RunnerDelegateImpl.java:131)
at 
org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner.access$100(PowerMockJUnit47RunnerDelegateImpl.java:59)
at 
org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner$TestExecutorStatement.evaluate(PowerMockJUnit47RunnerDelegateImpl.java:147)
at 
org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner.evaluateStatement(PowerMockJUnit47RunnerDelegateImpl.java:107)
at 
org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner.executeTest(PowerMockJUnit47RunnerDelegateImpl.java:82)
at 
org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$PowerMockJUnit44MethodRunner.runBeforesThenTestThenAfters(PowerMockJUnit44RunnerDelegateImpl.java:298)
at org.junit.internal.runners.MethodRoadie.runTest(MethodRoadie.java:87)
at org.junit.internal.runners.MethodRoadie.run(MethodRoadie.java:50)
at 
org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl.invokeTestMethod(PowerMockJUnit44RunnerDelegateImpl.java:218)
at 
org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl.runMethods(PowerMockJUnit44RunnerDelegateImpl.java:160)
at 
org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$1.run(PowerMockJUnit44RunnerDelegateImpl.java:134)
at org.junit.internal.runners.ClassRoadie.runUnprotected(ClassRoadie.java:34)
at org.junit.internal.runners.ClassRoadie.runProtected(ClassRoadie.java:44)
at 
org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl.run(PowerMockJUnit44RunnerDelegateImpl.java:136)
at 
org.powermock.modules.junit4.common.internal.impl.JUnit4TestSuiteChunkerImpl.run(JUnit4TestSuiteChunkerImpl.java:117)
at 
org.powermock.modules.junit4.common.internal.impl.AbstractCommonPowerMockRunner.run(AbstractCommonPowerMockRunner.java:57)
at org.powermock.modules.junit4.PowerMockRunner.run(PowerMockRunner.java:59)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
at 
org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:62)
at 
org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at

[jira] [Commented] (KAFKA-7937) Flaky Test ResetConsumerGroupOffsetTest.testResetOffsetsNotExistingGroup

2019-03-19 Thread Matthias J. Sax (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16796424#comment-16796424
 ] 

Matthias J. Sax commented on KAFKA-7937:


Failed again: 
[https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/76/testReport/junit/kafka.admin/ResetConsumerGroupOffsetTest/testResetOffsetsNotExistingGroup/]

> Flaky Test ResetConsumerGroupOffsetTest.testResetOffsetsNotExistingGroup
> 
>
> Key: KAFKA-7937
> URL: https://issues.apache.org/jira/browse/KAFKA-7937
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, clients, unit tests
>Affects Versions: 2.2.0, 2.3.0
>Reporter: Matthias J. Sax
>Assignee: Gwen Shapira
>Priority: Critical
> Fix For: 2.3.0, 2.2.1
>
>
> To get stable nightly builds for the `2.2` release, I create tickets for all 
> observed test failures.
> https://builds.apache.org/blue/organizations/jenkins/kafka-2.2-jdk8/detail/kafka-2.2-jdk8/19/pipeline
> {quote}kafka.admin.ResetConsumerGroupOffsetTest > 
> testResetOffsetsNotExistingGroup FAILED 
> java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.CoordinatorNotAvailableException: The 
> coordinator is not available. at 
> org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
>  at 
> org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
>  at 
> org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:89)
>  at 
> org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:260)
>  at 
> kafka.admin.ConsumerGroupCommand$ConsumerGroupService.resetOffsets(ConsumerGroupCommand.scala:306)
>  at 
> kafka.admin.ResetConsumerGroupOffsetTest.testResetOffsetsNotExistingGroup(ResetConsumerGroupOffsetTest.scala:89)
>  Caused by: org.apache.kafka.common.errors.CoordinatorNotAvailableException: 
> The coordinator is not available.{quote}





[jira] [Commented] (KAFKA-7989) Flaky Test RequestQuotaTest#testResponseThrottleTimeWhenBothFetchAndRequestQuotasViolated

2019-03-19 Thread Matthias J. Sax (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16796425#comment-16796425
 ] 

Matthias J. Sax commented on KAFKA-7989:


Failed again: 
[https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/76/testReport/junit/kafka.server/RequestQuotaTest/testResponseThrottleTimeWhenBothFetchAndRequestQuotasViolated/]

> Flaky Test 
> RequestQuotaTest#testResponseThrottleTimeWhenBothFetchAndRequestQuotasViolated
> -
>
> Key: KAFKA-7989
> URL: https://issues.apache.org/jira/browse/KAFKA-7989
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Assignee: Anna Povzner
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.2.1
>
>
> To get stable nightly builds for the `2.2` release, I create tickets for all 
> observed test failures.
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/27/]
> {quote}java.util.concurrent.ExecutionException: java.lang.AssertionError: 
> Throttle time metrics for consumer quota not updated: 
> small-quota-consumer-client at 
> java.util.concurrent.FutureTask.report(FutureTask.java:122) at 
> java.util.concurrent.FutureTask.get(FutureTask.java:206) at 
> kafka.server.RequestQuotaTest.$anonfun$waitAndCheckResults$1(RequestQuotaTest.scala:415)
>  at scala.collection.immutable.List.foreach(List.scala:392) at 
> scala.collection.generic.TraversableForwarder.foreach(TraversableForwarder.scala:38)
>  at 
> scala.collection.generic.TraversableForwarder.foreach$(TraversableForwarder.scala:38)
>  at scala.collection.mutable.ListBuffer.foreach(ListBuffer.scala:47) at 
> kafka.server.RequestQuotaTest.waitAndCheckResults(RequestQuotaTest.scala:413) 
> at 
> kafka.server.RequestQuotaTest.testResponseThrottleTimeWhenBothFetchAndRequestQuotasViolated(RequestQuotaTest.scala:134){quote}





[jira] [Commented] (KAFKA-8123) Flaky Test RequestQuotaTest#testResponseThrottleTimeWhenBothProduceAndRequestQuotasViolated

2019-03-19 Thread Matthias J. Sax (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16796426#comment-16796426
 ] 

Matthias J. Sax commented on KAFKA-8123:


Failed again: 
[https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/76/testReport/junit/kafka.server/RequestQuotaTest/testResponseThrottleTimeWhenBothProduceAndRequestQuotasViolated/]

> Flaky Test 
> RequestQuotaTest#testResponseThrottleTimeWhenBothProduceAndRequestQuotasViolated
>  
> 
>
> Key: KAFKA-8123
> URL: https://issues.apache.org/jira/browse/KAFKA-8123
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.3.0
>Reporter: Matthias J. Sax
>Assignee: Anna Povzner
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0
>
>
> [https://builds.apache.org/blue/organizations/jenkins/kafka-trunk-jdk8/detail/kafka-trunk-jdk8/3474/tests]
> {quote}java.util.concurrent.ExecutionException: java.lang.AssertionError: 
> Throttle time metrics for produce quota not updated: Client 
> small-quota-producer-client apiKey PRODUCE requests 1 requestTime 
> 0.015790873650539786 throttleTime 1000.0
> at java.util.concurrent.FutureTask.report(FutureTask.java:122)
> at java.util.concurrent.FutureTask.get(FutureTask.java:206)
> at 
> kafka.server.RequestQuotaTest$$anonfun$waitAndCheckResults$1.apply(RequestQuotaTest.scala:423)
> at 
> kafka.server.RequestQuotaTest$$anonfun$waitAndCheckResults$1.apply(RequestQuotaTest.scala:421)
> at scala.collection.immutable.List.foreach(List.scala:392)
> at 
> scala.collection.generic.TraversableForwarder$class.foreach(TraversableForwarder.scala:35)
> at scala.collection.mutable.ListBuffer.foreach(ListBuffer.scala:45)
> at 
> kafka.server.RequestQuotaTest.waitAndCheckResults(RequestQuotaTest.scala:421)
> at 
> kafka.server.RequestQuotaTest.testResponseThrottleTimeWhenBothProduceAndRequestQuotasViolated(RequestQuotaTest.scala:130){quote}
> STDOUT
> {quote}[2019-03-18 21:42:16,637] ERROR [KafkaApi-0] Error when handling 
> request: clientId=unauthorized-CONTROLLED_SHUTDOWN, correlationId=1, 
> api=CONTROLLED_SHUTDOWN, body=\{broker_id=0,broker_epoch=9223372036854775807} 
> (kafka.server.KafkaApis:76)
> org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
> Request(processor=2, connectionId=127.0.0.1:42118-127.0.0.1:47612-1, 
> session=Session(User:Unauthorized,/127.0.0.1), 
> listenerName=ListenerName(PLAINTEXT), securityProtocol=PLAINTEXT, 
> buffer=null) is not authorized.
> [2019-03-18 21:42:16,655] ERROR [KafkaApi-0] Error when handling request: 
> clientId=unauthorized-STOP_REPLICA, correlationId=1, api=STOP_REPLICA, 
> body=\{controller_id=0,controller_epoch=2147483647,broker_epoch=9223372036854775807,delete_partitions=true,partitions=[{topic=topic-1,partition_ids=[0]}]}
>  (kafka.server.KafkaApis:76)
> org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
> Request(processor=0, connectionId=127.0.0.1:42118-127.0.0.1:47614-2, 
> session=Session(User:Unauthorized,/127.0.0.1), 
> listenerName=ListenerName(PLAINTEXT), securityProtocol=PLAINTEXT, 
> buffer=null) is not authorized.
> [2019-03-18 21:42:16,657] ERROR [KafkaApi-0] Error when handling request: 
> clientId=unauthorized-LEADER_AND_ISR, correlationId=1, api=LEADER_AND_ISR, 
> body=\{controller_id=0,controller_epoch=2147483647,broker_epoch=9223372036854775807,topic_states=[{topic=topic-1,partition_states=[{partition=0,controller_epoch=2147483647,leader=0,leader_epoch=2147483647,isr=[0],zk_version=2,replicas=[0],is_new=true}]}],live_leaders=[\{id=0,host=localhost,port=0}]}
>  (kafka.server.KafkaApis:76)
> org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
> Request(processor=1, connectionId=127.0.0.1:42118-127.0.0.1:47616-2, 
> session=Session(User:Unauthorized,/127.0.0.1), 
> listenerName=ListenerName(PLAINTEXT), securityProtocol=PLAINTEXT, 
> buffer=null) is not authorized.
> [2019-03-18 21:42:16,668] ERROR [KafkaApi-0] Error when handling request: 
> clientId=unauthorized-UPDATE_METADATA, correlationId=1, api=UPDATE_METADATA, 
> body=\{controller_id=0,controller_epoch=2147483647,broker_epoch=9223372036854775807,topic_states=[{topic=topic-1,partition_states=[{partition=0,controller_epoch=2147483647,leader=0,leader_epoch=2147483647,isr=[0],zk_version=2,replicas=[0],offline_replicas=[]}]}],live_brokers=[\{id=0,end_points=[{port=0,host=localhost,listener_name=PLAINTEXT,security_protocol_type=0}],rack=null}]}
>  (kafka.server.KafkaApis:76)
> org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
> Request(processor=2, connectionId=127.0.0.1:42118-127.0.0.1:47618-2, 
> session=Session(User:Unauthorized,/127.0.0.1), 
> listenerName=ListenerName(PLAINTEXT), s

[jira] [Updated] (KAFKA-8094) Iterating over cache with get(key) is inefficient

2019-03-19 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-8094:
---
Fix Version/s: 2.3.0

> Iterating over cache with get(key) is inefficient 
> --
>
> Key: KAFKA-8094
> URL: https://issues.apache.org/jira/browse/KAFKA-8094
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Sophie Blee-Goldman
>Assignee: Sophie Blee-Goldman
>Priority: Major
>  Labels: streams
> Fix For: 2.3.0
>
>
> Currently, range queries in the caching layer are implemented by creating an 
> iterator over the subset of keys in the range, and calling get() on the 
> underlying TreeMap for each key. While this protects against 
> ConcurrentModificationException, we can improve performance by replacing the 
> TreeMap with a concurrent data structure such as ConcurrentSkipListMap and 
> then just iterating over a subMap.
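For illustration, a minimal standalone sketch of the two approaches described above; the class and variable names are illustrative, not the actual Streams internals:

{code}
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;
import java.util.concurrent.ConcurrentNavigableMap;
import java.util.concurrent.ConcurrentSkipListMap;

public class RangeQuerySketch {
    public static void main(String[] args) {
        TreeMap<String, String> treeCache = new TreeMap<>();
        treeCache.put("a", "1");
        treeCache.put("b", "2");
        treeCache.put("c", "3");

        // Current approach (sketch): snapshot the in-range keys, then call
        // get() once per key. This protects against
        // ConcurrentModificationException but pays an extra lookup per key.
        List<String> keysInRange = new ArrayList<>(treeCache.subMap("a", true, "c", true).keySet());
        for (String key : keysInRange) {
            System.out.println(key + " -> " + treeCache.get(key));
        }

        // Proposed approach (sketch): a ConcurrentSkipListMap can be iterated
        // through a subMap view directly; its iterators are weakly consistent,
        // so no snapshot and no per-key get() are needed.
        ConcurrentNavigableMap<String, String> skipListCache = new ConcurrentSkipListMap<>(treeCache);
        for (Map.Entry<String, String> entry : skipListCache.subMap("a", true, "c", true).entrySet()) {
            System.out.println(entry.getKey() + " -> " + entry.getValue());
        }
    }
}
{code}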





[jira] [Updated] (KAFKA-8062) StateListener is not notified when StreamThread dies

2019-03-19 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-8062:
---
Fix Version/s: 2.2.1
   2.3.0

> StateListener is not notified when StreamThread dies
> 
>
> Key: KAFKA-8062
> URL: https://issues.apache.org/jira/browse/KAFKA-8062
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.1.1
> Environment: Kafka 2.1.1, kafka-streams-scala version 2.1.1
>Reporter: Andrey Volkov
>Assignee: Guozhang Wang
>Priority: Minor
> Fix For: 2.3.0, 2.2.1
>
>
> I want my application to react when streams die, so I am trying to use 
> KafkaStreams.setStateListener and also checking KafkaStreams.state() from 
> time to time.
> The test scenario: Kafka is available, but there are no topics that my 
> Topology is supposed to use.
> I expect the streams to crash and the state listener to be notified about 
> that, with the new state ERROR. KafkaStreams.state() should also return ERROR.
> In reality, the streams crash, but the KafkaStreams.state() method always 
> returns REBALANCING, and the last time the StateListener was called, the new 
> state was also REBALANCING.
>  
> I believe the reason for this lies in two methods:
> org.apache.kafka.streams.KafkaStreams.StreamStateListener.onChange(), which 
> does not react to the state StreamThread.State.PENDING_SHUTDOWN,
> and
> org.apache.kafka.streams.processor.internals.StreamThread.RebalanceListener.onPartitionsAssigned,
> which calls shutdown(), setting the state to PENDING_SHUTDOWN, and then 
> streamThread.setStateListener(null), effectively removing the state listener, 
> so that the DEAD state of the thread never reaches the KafkaStreams object.
> Here is an extract from the logs:
> {{14:57:03.272 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> ERROR o.a.k.s.p.i.StreamsPartitionAssignor - stream-thread 
> [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1-consumer] 
> test-input-topic is unknown yet during rebalance, please make sure they have 
> been pre-created before starting the Streams application.}}
> {{14:57:03.283 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> INFO o.a.k.c.c.i.AbstractCoordinator - [Consumer 
> clientId=Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1-consumer, 
> groupId=Test] Successfully joined group with generation 1}}
> {{14:57:03.284 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> INFO o.a.k.c.c.i.ConsumerCoordinator - [Consumer 
> clientId=Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1-consumer, 
> groupId=Test] Setting newly assigned partitions []}}
> {{14:57:03.285 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> INFO o.a.k.s.p.i.StreamThread - stream-thread 
> [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] Informed to shut 
> down}}
> {{14:57:03.285 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> INFO o.a.k.s.p.i.StreamThread - stream-thread 
> [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] State transition 
> from PARTITIONS_REVOKED to PENDING_SHUTDOWN}}
> {{14:57:03.316 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> INFO o.a.k.s.p.i.StreamThread - stream-thread 
> [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] Shutting down}}
> {{14:57:03.317 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> INFO o.a.k.c.c.KafkaConsumer - [Consumer 
> clientId=Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1-restore-consumer,
>  groupId=] Unsubscribed all topics or patterns and assigned partitions}}
> {{14:57:03.317 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> INFO o.a.k.c.p.KafkaProducer - [Producer 
> clientId=Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1-producer] 
> Closing the Kafka producer with timeoutMillis = 9223372036854775807 ms.}}
> {{14:57:03.332 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> INFO o.a.k.s.p.i.StreamThread - stream-thread 
> [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] State transition 
> from PENDING_SHUTDOWN to DEAD}}
> {{14:57:03.332 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> INFO o.a.k.s.p.i.StreamThread - stream-thread 
> [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] Shutdown complete}}
> After this, calls to KafkaStreams.state() still return REBALANCING.
> There is a workaround of requesting KafkaStreams.localThreadsMetadata() and 
> checking each thread's state manually, but that seems very wrong.
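For reference, a minimal sketch of that workaround, assuming the 2.x API; the class and method names here are illustrative:

{code}
import java.util.Set;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.processor.ThreadMetadata;

public final class ThreadStateCheck {
    // Inspect the per-thread metadata directly, since (per this ticket) the
    // DEAD transition may never be propagated to the KafkaStreams-level
    // state listener.
    public static boolean allThreadsDead(final KafkaStreams streams) {
        final Set<ThreadMetadata> threads = streams.localThreadsMetadata();
        return !threads.isEmpty()
            && threads.stream().allMatch(t -> "DEAD".equals(t.threadState()));
    }
}
{code}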





[jira] [Commented] (KAFKA-7989) Flaky Test RequestQuotaTest#testResponseThrottleTimeWhenBothFetchAndRequestQuotasViolated

2019-03-19 Thread Matthias J. Sax (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16796580#comment-16796580
 ] 

Matthias J. Sax commented on KAFKA-7989:


[https://builds.apache.org/blue/organizations/jenkins/kafka-trunk-jdk8/detail/kafka-trunk-jdk8/3477/tests]

> Flaky Test 
> RequestQuotaTest#testResponseThrottleTimeWhenBothFetchAndRequestQuotasViolated
> -
>
> Key: KAFKA-7989
> URL: https://issues.apache.org/jira/browse/KAFKA-7989
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Assignee: Anna Povzner
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.2.1
>
>
> To get stable nightly builds for the `2.2` release, I create tickets for all 
> observed test failures.
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/27/]
> {quote}java.util.concurrent.ExecutionException: java.lang.AssertionError: 
> Throttle time metrics for consumer quota not updated: 
> small-quota-consumer-client at 
> java.util.concurrent.FutureTask.report(FutureTask.java:122) at 
> java.util.concurrent.FutureTask.get(FutureTask.java:206) at 
> kafka.server.RequestQuotaTest.$anonfun$waitAndCheckResults$1(RequestQuotaTest.scala:415)
>  at scala.collection.immutable.List.foreach(List.scala:392) at 
> scala.collection.generic.TraversableForwarder.foreach(TraversableForwarder.scala:38)
>  at 
> scala.collection.generic.TraversableForwarder.foreach$(TraversableForwarder.scala:38)
>  at scala.collection.mutable.ListBuffer.foreach(ListBuffer.scala:47) at 
> kafka.server.RequestQuotaTest.waitAndCheckResults(RequestQuotaTest.scala:413) 
> at 
> kafka.server.RequestQuotaTest.testResponseThrottleTimeWhenBothFetchAndRequestQuotasViolated(RequestQuotaTest.scala:134){quote}





[jira] [Commented] (KAFKA-7989) Flaky Test RequestQuotaTest#testResponseThrottleTimeWhenBothFetchAndRequestQuotasViolated

2019-03-19 Thread Matthias J. Sax (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16796582#comment-16796582
 ] 

Matthias J. Sax commented on KAFKA-7989:


And again: 
[https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/77/testReport/kafka.server/RequestQuotaTest/testResponseThrottleTimeWhenBothFetchAndRequestQuotasViolated/]

> Flaky Test 
> RequestQuotaTest#testResponseThrottleTimeWhenBothFetchAndRequestQuotasViolated
> -
>
> Key: KAFKA-7989
> URL: https://issues.apache.org/jira/browse/KAFKA-7989
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Assignee: Anna Povzner
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.2.1
>
>
> To get stable nightly builds for the `2.2` release, I create tickets for all 
> observed test failures.
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/27/]
> {quote}java.util.concurrent.ExecutionException: java.lang.AssertionError: 
> Throttle time metrics for consumer quota not updated: 
> small-quota-consumer-client at 
> java.util.concurrent.FutureTask.report(FutureTask.java:122) at 
> java.util.concurrent.FutureTask.get(FutureTask.java:206) at 
> kafka.server.RequestQuotaTest.$anonfun$waitAndCheckResults$1(RequestQuotaTest.scala:415)
>  at scala.collection.immutable.List.foreach(List.scala:392) at 
> scala.collection.generic.TraversableForwarder.foreach(TraversableForwarder.scala:38)
>  at 
> scala.collection.generic.TraversableForwarder.foreach$(TraversableForwarder.scala:38)
>  at scala.collection.mutable.ListBuffer.foreach(ListBuffer.scala:47) at 
> kafka.server.RequestQuotaTest.waitAndCheckResults(RequestQuotaTest.scala:413) 
> at 
> kafka.server.RequestQuotaTest.testResponseThrottleTimeWhenBothFetchAndRequestQuotasViolated(RequestQuotaTest.scala:134){quote}





[jira] [Commented] (KAFKA-7777) Decouple topic serdes from materialized serdes

2019-03-19 Thread Matthias J. Sax (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16796604#comment-16796604
 ] 

Matthias J. Sax commented on KAFKA-7777:


[~pgwhalen] What you describe might already be possible today. Using the 
Processor API, users can implement the `StateStore` interface with any internal 
storage engine they like and implement whatever change-logging mechanism they 
need. You can also expose any interface you like for querying a store by 
implementing a custom `QueryableStoreType`.

The coupling described in this ticket is a DSL limitation only.
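For example, a bare-bones sketch of such a custom store, assuming the 2.x Processor API; the class name and the put/get accessors are illustrative:

{code}
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.streams.processor.ProcessorContext;
import org.apache.kafka.streams.processor.StateStore;

public class InMemoryCustomStore implements StateStore {
    private final String name;
    private final Map<String, String> data = new HashMap<>();
    private boolean open = false;

    public InMemoryCustomStore(final String name) {
        this.name = name;
    }

    @Override
    public String name() {
        return name;
    }

    @Override
    public void init(final ProcessorContext context, final StateStore root) {
        // A real store would wire up its own change-logging here, writing to
        // a changelog topic and restoring from it in this restore callback.
        context.register(root, (key, value) -> { /* restore from changelog */ });
        open = true;
    }

    @Override
    public void flush() {
        // Nothing to flush for a plain in-memory map.
    }

    @Override
    public void close() {
        open = false;
    }

    @Override
    public boolean persistent() {
        return false;
    }

    @Override
    public boolean isOpen() {
        return open;
    }

    public void put(final String key, final String value) {
        data.put(key, value);
    }

    public String get(final String key) {
        return data.get(key);
    }
}
{code}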

> Decouple topic serdes from materialized serdes
> --
>
> Key: KAFKA-7777
> URL: https://issues.apache.org/jira/browse/KAFKA-7777
> Project: Kafka
>  Issue Type: Wish
>  Components: streams
>Reporter: Maarten
>Priority: Minor
>  Labels: needs-kip
>
> It would be valuable to us to have the encoding format in a Kafka topic 
> decoupled from the encoding format used to cache the data locally in a Kafka 
> Streams app.
> We would like to use the `range()` function in the interactive queries API to 
> look up a series of results, but can't with our encoding scheme due to our 
> keys being variable length.
> We use Protobuf but, based on what I've read, Avro, FlatBuffers, and Cap'n 
> Proto have similar problems.
> Currently we use the following code to work around this problem:
> {code}
> // Re-encode the input topic with the serdes we need for the local store,
> // then materialize the table from the intermediate topic.
> builder
>     .stream("input-topic", Consumed.with(inputKeySerde, inputValueSerde))
>     .to("intermediate-topic", Produced.with(intermediateKeySerde, intermediateValueSerde));
> t1 = builder
>     .table("intermediate-topic", Consumed.with(intermediateKeySerde, intermediateValueSerde), t1Materialized);
> {code}
> With the encoding formats decoupled, the code above could be reduced to a 
> single step, not requiring an intermediate topic.
> Based on feedback on my [SO 
> question|https://stackoverflow.com/questions/53913571/is-there-a-way-to-separate-kafka-topic-serdes-from-materialized-serdes]
>  a change that introduces this would impact state restoration when using an 
> input topic for recovery.





[jira] [Assigned] (KAFKA-7928) Deprecate WindowStore.put(key, value)

2019-03-19 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax reassigned KAFKA-7928:
--

Assignee: Slim Ouertani

> Deprecate WindowStore.put(key, value)
> -
>
> Key: KAFKA-7928
> URL: https://issues.apache.org/jira/browse/KAFKA-7928
> Project: Kafka
>  Issue Type: Improvement
>Reporter: John Roesler
>Assignee: Slim Ouertani
>Priority: Major
>  Labels: beginner, easy-fix, needs-kip, newbie
>
> Specifically, `org.apache.kafka.streams.state.WindowStore#put(K, V)`
> This method is strange... A window store needs to have a timestamp associated 
> with the key, so if you do a put without a timestamp, it's up to the store to 
> just make one up.
> Even the javadoc on the method recommends not using it, because of this 
> confusing behavior.
> We should just deprecate it.
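For illustration, a small sketch of the two put variants, assuming a store handle obtained inside a processor; the class and method names are illustrative:

{code}
import org.apache.kafka.streams.state.WindowStore;

public final class WindowPutSketch {
    // The single-argument put leaves the window start timestamp up to the
    // store (the behavior this ticket proposes to deprecate); the overload
    // with an explicit timestamp makes the association unambiguous.
    public static void writeBoth(final WindowStore<String, Long> store,
                                 final long observedTimestamp) {
        store.put("key", 1L);                    // store makes up a timestamp
        store.put("key", 1L, observedTimestamp); // explicit window start
    }
}
{code}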





[jira] [Commented] (KAFKA-7777) Decouple topic serdes from materialized serdes

2019-03-19 Thread Matthias J. Sax (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16796650#comment-16796650
 ] 

Matthias J. Sax commented on KAFKA-7777:


No need to apologize, you did not "hijack" this ticket :)

What do you mean by "hooking into the change-logging code already built"? What 
API do you have in mind for reusing existing code? From what I can tell at the 
moment, it seems that it would be hard to share existing code, but I might be 
wrong. If you have a good idea, feel free to create a new ticket to follow up.

> Decouple topic serdes from materialized serdes
> --
>
> Key: KAFKA-7777
> URL: https://issues.apache.org/jira/browse/KAFKA-7777
> Project: Kafka
>  Issue Type: Wish
>  Components: streams
>Reporter: Maarten
>Priority: Minor
>  Labels: needs-kip
>
> It would be valuable to us to have the encoding format in a Kafka topic 
> decoupled from the encoding format used to cache the data locally in a Kafka 
> Streams app.
> We would like to use the `range()` function in the interactive queries API to 
> look up a series of results, but can't with our encoding scheme due to our 
> keys being variable length.
> We use Protobuf but, based on what I've read, Avro, FlatBuffers, and Cap'n 
> Proto have similar problems.
> Currently we use the following code to work around this problem:
> {code}
> // Re-encode the input topic with the serdes we need for the local store,
> // then materialize the table from the intermediate topic.
> builder
>     .stream("input-topic", Consumed.with(inputKeySerde, inputValueSerde))
>     .to("intermediate-topic", Produced.with(intermediateKeySerde, intermediateValueSerde));
> t1 = builder
>     .table("intermediate-topic", Consumed.with(intermediateKeySerde, intermediateValueSerde), t1Materialized);
> {code}
> With the encoding formats decoupled, the code above could be reduced to a 
> single step, not requiring an intermediate topic.
> Based on feedback on my [SO 
> question|https://stackoverflow.com/questions/53913571/is-there-a-way-to-separate-kafka-topic-serdes-from-materialized-serdes]
>  a change that introduces this would impact state restoration when using an 
> input topic for recovery.





[jira] [Created] (KAFKA-8132) Flaky Test MirrorMakerIntegrationTest #testCommaSeparatedRegex

2019-03-20 Thread Matthias J. Sax (JIRA)
Matthias J. Sax created KAFKA-8132:
--

 Summary: Flaky Test MirrorMakerIntegrationTest 
#testCommaSeparatedRegex
 Key: KAFKA-8132
 URL: https://issues.apache.org/jira/browse/KAFKA-8132
 Project: Kafka
  Issue Type: Bug
  Components: mirrormaker, unit tests
Affects Versions: 2.1.1
Reporter: Matthias J. Sax
 Fix For: 2.3.0, 2.1.2, 2.2.1


[https://builds.apache.org/blue/organizations/jenkins/kafka-2.1-jdk8/detail/kafka-2.1-jdk8/150/tests]
{quote}kafka.tools.MirrorMaker$NoRecordsException
at kafka.tools.MirrorMaker$ConsumerWrapper.receive(MirrorMaker.scala:483)
at 
kafka.tools.MirrorMakerIntegrationTest$$anonfun$testCommaSeparatedRegex$1.apply$mcZ$sp(MirrorMakerIntegrationTest.scala:92)
at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:742)
at 
kafka.tools.MirrorMakerIntegrationTest.testCommaSeparatedRegex(MirrorMakerIntegrationTest.scala:90){quote}
STDOUT (repeated output):

{quote}[2019-03-19 21:47:06,115] ERROR [Consumer clientId=consumer-1029, 
groupId=test-group] Offset commit failed on partition nonexistent-topic1-0 at 
offset 0: This server does not host this topic-partition. 
(org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:812){quote} 





[jira] [Updated] (KAFKA-8059) Flaky Test DynamicConnectionQuotaTest #testDynamicConnectionQuota

2019-03-20 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-8059:
---
Fix Version/s: 2.1.2
   2.3.0

> Flaky Test DynamicConnectionQuotaTest #testDynamicConnectionQuota
> -
>
> Key: KAFKA-8059
> URL: https://issues.apache.org/jira/browse/KAFKA-8059
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0, 2.1.1
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.1.2, 2.2.1
>
>
> [https://builds.apache.org/blue/organizations/jenkins/kafka-2.2-jdk8/detail/kafka-2.2-jdk8/46/tests]
> {quote}org.scalatest.junit.JUnitTestFailedError: Expected exception 
> java.io.IOException to be thrown, but no exception was thrown
> at 
> org.scalatest.junit.AssertionsForJUnit$class.newAssertionFailedException(AssertionsForJUnit.scala:100)
> at 
> org.scalatest.junit.JUnitSuite.newAssertionFailedException(JUnitSuite.scala:71)
> at org.scalatest.Assertions$class.intercept(Assertions.scala:822)
> at org.scalatest.junit.JUnitSuite.intercept(JUnitSuite.scala:71)
> at 
> kafka.network.DynamicConnectionQuotaTest.testDynamicConnectionQuota(DynamicConnectionQuotaTest.scala:82){quote}





[jira] [Commented] (KAFKA-8059) Flaky Test DynamicConnectionQuotaTest #testDynamicConnectionQuota

2019-03-20 Thread Matthias J. Sax (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16797334#comment-16797334
 ] 

Matthias J. Sax commented on KAFKA-8059:


Failed again, this time in 2.1: 
[https://builds.apache.org/blue/organizations/jenkins/kafka-2.1-jdk8/detail/kafka-2.1-jdk8/150/tests]

> Flaky Test DynamicConnectionQuotaTest #testDynamicConnectionQuota
> -
>
> Key: KAFKA-8059
> URL: https://issues.apache.org/jira/browse/KAFKA-8059
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.2.1
>
>
> [https://builds.apache.org/blue/organizations/jenkins/kafka-2.2-jdk8/detail/kafka-2.2-jdk8/46/tests]
> {quote}org.scalatest.junit.JUnitTestFailedError: Expected exception 
> java.io.IOException to be thrown, but no exception was thrown
> at 
> org.scalatest.junit.AssertionsForJUnit$class.newAssertionFailedException(AssertionsForJUnit.scala:100)
> at 
> org.scalatest.junit.JUnitSuite.newAssertionFailedException(JUnitSuite.scala:71)
> at org.scalatest.Assertions$class.intercept(Assertions.scala:822)
> at org.scalatest.junit.JUnitSuite.intercept(JUnitSuite.scala:71)
> at 
> kafka.network.DynamicConnectionQuotaTest.testDynamicConnectionQuota(DynamicConnectionQuotaTest.scala:82){quote}





[jira] [Updated] (KAFKA-8059) Flaky Test DynamicConnectionQuotaTest #testDynamicConnectionQuota

2019-03-20 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-8059:
---
Affects Version/s: 2.1.1

> Flaky Test DynamicConnectionQuotaTest #testDynamicConnectionQuota
> -
>
> Key: KAFKA-8059
> URL: https://issues.apache.org/jira/browse/KAFKA-8059
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0, 2.1.1
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.2.1
>
>
> [https://builds.apache.org/blue/organizations/jenkins/kafka-2.2-jdk8/detail/kafka-2.2-jdk8/46/tests]
> {quote}org.scalatest.junit.JUnitTestFailedError: Expected exception 
> java.io.IOException to be thrown, but no exception was thrown
> at 
> org.scalatest.junit.AssertionsForJUnit$class.newAssertionFailedException(AssertionsForJUnit.scala:100)
> at 
> org.scalatest.junit.JUnitSuite.newAssertionFailedException(JUnitSuite.scala:71)
> at org.scalatest.Assertions$class.intercept(Assertions.scala:822)
> at org.scalatest.junit.JUnitSuite.intercept(JUnitSuite.scala:71)
> at 
> kafka.network.DynamicConnectionQuotaTest.testDynamicConnectionQuota(DynamicConnectionQuotaTest.scala:82){quote}





[jira] [Created] (KAFKA-8133) Flaky Test MetadataRequestTest#testNoTopicsRequest

2019-03-20 Thread Matthias J. Sax (JIRA)
Matthias J. Sax created KAFKA-8133:
--

 Summary: Flaky Test MetadataRequestTest#testNoTopicsRequest
 Key: KAFKA-8133
 URL: https://issues.apache.org/jira/browse/KAFKA-8133
 Project: Kafka
  Issue Type: Bug
  Components: core, unit tests
Affects Versions: 2.1.1
Reporter: Matthias J. Sax
 Fix For: 2.3.0, 2.1.2, 2.2.1


[https://builds.apache.org/blue/organizations/jenkins/kafka-2.1-jdk8/detail/kafka-2.1-jdk8/151/tests]
{quote}org.apache.kafka.common.errors.TopicExistsException: Topic 't1' already 
exists.{quote}
STDOUT:
{quote}[2019-03-20 03:49:00,982] ERROR [ReplicaFetcher replicaId=1, leaderId=0, 
fetcherId=0] Error for partition isr-after-broker-shutdown-0 at offset 0 
(kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
does not host this topic-partition.
[2019-03-20 03:49:00,982] ERROR [ReplicaFetcher replicaId=2, leaderId=0, 
fetcherId=0] Error for partition isr-after-broker-shutdown-0 at offset 0 
(kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
does not host this topic-partition.
[2019-03-20 03:49:15,319] ERROR [ReplicaFetcher replicaId=1, leaderId=2, 
fetcherId=0] Error for partition replicaDown-0 at offset 0 
(kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
does not host this topic-partition.
[2019-03-20 03:49:15,319] ERROR [ReplicaFetcher replicaId=0, leaderId=2, 
fetcherId=0] Error for partition replicaDown-0 at offset 0 
(kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
does not host this topic-partition.
[2019-03-20 03:49:20,049] ERROR [ReplicaFetcher replicaId=0, leaderId=1, 
fetcherId=0] Error for partition testAutoCreate_Topic-0 at offset 0 
(kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
does not host this topic-partition.
[2019-03-20 03:49:27,080] ERROR [ReplicaFetcher replicaId=0, leaderId=2, 
fetcherId=0] Error for partition __consumer_offsets-1 at offset 0 
(kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
does not host this topic-partition.
[2019-03-20 03:49:27,080] ERROR [ReplicaFetcher replicaId=1, leaderId=0, 
fetcherId=0] Error for partition __consumer_offsets-2 at offset 0 
(kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
does not host this topic-partition.
[2019-03-20 03:49:27,080] ERROR [ReplicaFetcher replicaId=2, leaderId=1, 
fetcherId=0] Error for partition __consumer_offsets-0 at offset 0 
(kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
does not host this topic-partition.
[2019-03-20 03:49:27,538] ERROR [ReplicaFetcher replicaId=2, leaderId=1, 
fetcherId=0] Error for partition notInternal-0 at offset 0 
(kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
does not host this topic-partition.
[2019-03-20 03:49:27,538] ERROR [ReplicaFetcher replicaId=0, leaderId=2, 
fetcherId=0] Error for partition notInternal-1 at offset 0 
(kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
does not host this topic-partition.
[2019-03-20 03:49:28,863] WARN Unable to read additional data from client 
sessionid 0x102fbd81b150003, likely client has closed socket 
(org.apache.zookeeper.server.NIOServerCnxn:376)
[2019-03-20 03:49:40,478] ERROR [ReplicaFetcher replicaId=2, leaderId=1, 
fetcherId=0] Error for partition t1-2 at offset 0 
(kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
does not host this topic-partition.
[2019-03-20 03:49:40,921] ERROR [ReplicaFetcher replicaId=0, leaderId=1, 
fetcherId=0] Error for partition t2-0 at offset 0 
(kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
does not host this topic-partition.
[2019-03-20 03:49:40,922] ERROR [ReplicaFetcher replicaId=1, leaderId=2, 
fetcherId=0] Error for partition t2-1 at offset 0 
(kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
does not host this topic-partition.
[2019-03-20 03:49:52,098] WARN Client session timed out, have not heard from 
server in 4002ms for sessionid 0x102fbd86a4a0002 
(org.apache.zookeeper.ClientCnxn:1112)
[2019-03-20 03:49:52,099] WARN Unable to read additional data from client 
sessionid 0x102fbd86a4a0002, likely client has closed socket 
(org.apache.zookeeper.server.NIOServerCnxn:376)
[2019-03-20 03:49:52,415] WARN C

[jira] [Commented] (KAFKA-8026) Flaky Test RegexSourceIntegrationTest#testRegexMatchesTopicsAWhenDeleted

2019-03-20 Thread Matthias J. Sax (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16797328#comment-16797328
 ] 

Matthias J. Sax commented on KAFKA-8026:


[https://builds.apache.org/blue/organizations/jenkins/kafka-1.1-jdk7/detail/kafka-1.1-jdk7/253/tests]

> Flaky Test RegexSourceIntegrationTest#testRegexMatchesTopicsAWhenDeleted
> 
>
> Key: KAFKA-8026
> URL: https://issues.apache.org/jira/browse/KAFKA-8026
> Project: Kafka
>  Issue Type: Bug
>  Components: streams, unit tests
>Affects Versions: 1.0.2, 1.1.1
>Reporter: Matthias J. Sax
>Assignee: Bill Bejeck
>Priority: Critical
>  Labels: flaky-test
> Fix For: 1.0.3, 1.1.2
>
>
> {quote}java.lang.AssertionError: Condition not met within timeout 15000. 
> Stream tasks not updated
> at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:276)
> at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:254)
> at 
> org.apache.kafka.streams.integration.RegexSourceIntegrationTest.testRegexMatchesTopicsAWhenDeleted(RegexSourceIntegrationTest.java:215){quote}
> Happened in 1.0 and 1.1 builds:
> [https://builds.apache.org/blue/organizations/jenkins/kafka-1.0-jdk7/detail/kafka-1.0-jdk7/263/tests/]
> and
> [https://builds.apache.org/blue/organizations/jenkins/kafka-1.1-jdk7/detail/kafka-1.1-jdk7/249/tests/]
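
For context, pattern subscription is the feature exercised by the test quoted
above. A minimal sketch (pattern and topic names invented, Kafka Streams 2.x
API):

{code:java}
import java.util.regex.Pattern;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;

StreamsBuilder builder = new StreamsBuilder();
// subscribes to every existing and future topic matching the pattern; the
// flaky test deletes one matching topic and waits for the tasks to update
KStream<String, String> stream = builder.stream(Pattern.compile("topic-[A-Z]"));
stream.to("output-topic");
{code}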





[jira] [Updated] (KAFKA-7937) Flaky Test ResetConsumerGroupOffsetTest.testResetOffsetsNotExistingGroup

2019-03-20 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7937?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-7937:
---
Fix Version/s: 2.1.2

> Flaky Test ResetConsumerGroupOffsetTest.testResetOffsetsNotExistingGroup
> 
>
> Key: KAFKA-7937
> URL: https://issues.apache.org/jira/browse/KAFKA-7937
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, clients, unit tests
>Affects Versions: 2.2.0, 2.1.1, 2.3.0
>Reporter: Matthias J. Sax
>Assignee: Gwen Shapira
>Priority: Critical
> Fix For: 2.3.0, 2.1.2, 2.2.1
>
>
> To get stable nightly builds for the `2.2` release, I create tickets for all 
> observed test failures.
> https://builds.apache.org/blue/organizations/jenkins/kafka-2.2-jdk8/detail/kafka-2.2-jdk8/19/pipeline
> {quote}kafka.admin.ResetConsumerGroupOffsetTest > 
> testResetOffsetsNotExistingGroup FAILED 
> java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.CoordinatorNotAvailableException: The 
> coordinator is not available. at 
> org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
>  at 
> org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
>  at 
> org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:89)
>  at 
> org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:260)
>  at 
> kafka.admin.ConsumerGroupCommand$ConsumerGroupService.resetOffsets(ConsumerGroupCommand.scala:306)
>  at 
> kafka.admin.ResetConsumerGroupOffsetTest.testResetOffsetsNotExistingGroup(ResetConsumerGroupOffsetTest.scala:89)
>  Caused by: org.apache.kafka.common.errors.CoordinatorNotAvailableException: 
> The coordinator is not available.{quote}





[jira] [Commented] (KAFKA-7937) Flaky Test ResetConsumerGroupOffsetTest.testResetOffsetsNotExistingGroup

2019-03-20 Thread Matthias J. Sax (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16797336#comment-16797336
 ] 

Matthias J. Sax commented on KAFKA-7937:


[https://builds.apache.org/blue/organizations/jenkins/kafka-2.1-jdk8/detail/kafka-2.1-jdk8/150/tests]

> Flaky Test ResetConsumerGroupOffsetTest.testResetOffsetsNotExistingGroup
> 
>
> Key: KAFKA-7937
> URL: https://issues.apache.org/jira/browse/KAFKA-7937
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, clients, unit tests
>Affects Versions: 2.2.0, 2.3.0
>Reporter: Matthias J. Sax
>Assignee: Gwen Shapira
>Priority: Critical
> Fix For: 2.3.0, 2.2.1
>
>
> To get stable nightly builds for the `2.2` release, I create tickets for all 
> observed test failures.
> https://builds.apache.org/blue/organizations/jenkins/kafka-2.2-jdk8/detail/kafka-2.2-jdk8/19/pipeline
> {quote}kafka.admin.ResetConsumerGroupOffsetTest > 
> testResetOffsetsNotExistingGroup FAILED 
> java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.CoordinatorNotAvailableException: The 
> coordinator is not available. at 
> org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
>  at 
> org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
>  at 
> org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:89)
>  at 
> org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:260)
>  at 
> kafka.admin.ConsumerGroupCommand$ConsumerGroupService.resetOffsets(ConsumerGroupCommand.scala:306)
>  at 
> kafka.admin.ResetConsumerGroupOffsetTest.testResetOffsetsNotExistingGroup(ResetConsumerGroupOffsetTest.scala:89)
>  Caused by: org.apache.kafka.common.errors.CoordinatorNotAvailableException: 
> The coordinator is not available.{quote}





[jira] [Updated] (KAFKA-7937) Flaky Test ResetConsumerGroupOffsetTest.testResetOffsetsNotExistingGroup

2019-03-20 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7937?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-7937:
---
Affects Version/s: 2.1.1

> Flaky Test ResetConsumerGroupOffsetTest.testResetOffsetsNotExistingGroup
> 
>
> Key: KAFKA-7937
> URL: https://issues.apache.org/jira/browse/KAFKA-7937
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, clients, unit tests
>Affects Versions: 2.2.0, 2.1.1, 2.3.0
>Reporter: Matthias J. Sax
>Assignee: Gwen Shapira
>Priority: Critical
> Fix For: 2.3.0, 2.2.1
>
>
> To get stable nightly builds for the `2.2` release, I create tickets for all 
> observed test failures.
> https://builds.apache.org/blue/organizations/jenkins/kafka-2.2-jdk8/detail/kafka-2.2-jdk8/19/pipeline
> {quote}kafka.admin.ResetConsumerGroupOffsetTest > 
> testResetOffsetsNotExistingGroup FAILED 
> java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.CoordinatorNotAvailableException: The 
> coordinator is not available. at 
> org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
>  at 
> org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
>  at 
> org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:89)
>  at 
> org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:260)
>  at 
> kafka.admin.ConsumerGroupCommand$ConsumerGroupService.resetOffsets(ConsumerGroupCommand.scala:306)
>  at 
> kafka.admin.ResetConsumerGroupOffsetTest.testResetOffsetsNotExistingGroup(ResetConsumerGroupOffsetTest.scala:89)
>  Caused by: org.apache.kafka.common.errors.CoordinatorNotAvailableException: 
> The coordinator is not available.{quote}





[jira] [Commented] (KAFKA-7777) Decouple topic serdes from materialized serdes

2019-03-20 Thread Matthias J. Sax (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16797353#comment-16797353
 ] 

Matthias J. Sax commented on KAFKA-7777:


Sounds like a fair request – the DSL changelogging mechanism is not part of the 
public API (and custom stores are not well documented in general :() and it 
would be helpful to design a proper, easy-to-use public API for the Processor 
API. If you are interested in working on this, feel free to create a new 
ticket – note that this change will require a KIP.
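
For context, a minimal sketch (store name, key bounds, and contents invented)
of the interactive-query `range()` lookup the quoted report below wants to use;
with variable-length serialized keys the byte-wise order of the cached data
does not line up with the logical key order, which is what motivates decoupling
the store encoding from the topic encoding:

{code:java}
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.state.KeyValueIterator;
import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;

// 'streams' is assumed to be a running KafkaStreams instance that
// materializes a queryable store named "t1-store"
ReadOnlyKeyValueStore<String, Long> store =
    streams.store("t1-store", QueryableStoreTypes.keyValueStore());
try (KeyValueIterator<String, Long> range = store.range("user-000", "user-999")) {
    while (range.hasNext()) {
        KeyValue<String, Long> entry = range.next();
        System.out.println(entry.key + " -> " + entry.value);
    }
}
{code}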

> Decouple topic serdes from materialized serdes
> --
>
> Key: KAFKA-7777
> URL: https://issues.apache.org/jira/browse/KAFKA-7777
> Project: Kafka
>  Issue Type: Wish
>  Components: streams
>Reporter: Maarten
>Priority: Minor
>  Labels: needs-kip
>
> It would be valuable to us to have the encoding format in a Kafka topic 
> decoupled from the encoding format used to cache the data locally in a Kafka 
> Streams app. 
> We would like to use the `range()` function in the interactive queries API to 
> look up a series of results, but can't with our encoding scheme due to our 
> keys being variable length.
> We use protobuf, but based on what I've read, Avro, Flatbuffers and Cap'n 
> proto have similar problems.
> Currently we use the following code to work around this problem:
> {code}
> builder
>     .stream("input-topic", Consumed.with(inputKeySerde, inputValueSerde))
>     .to("intermediate-topic", Produced.with(intermediateKeySerde, intermediateValueSerde));
>
> t1 = builder
>     .table("intermediate-topic", Consumed.with(intermediateKeySerde, intermediateValueSerde), t1Materialized);
> {code}
> With the encoding formats decoupled, the code above could be reduced to a 
> single step, not requiring an intermediate topic.
> Based on feedback on my [SO 
> question|https://stackoverflow.com/questions/53913571/is-there-a-way-to-separate-kafka-topic-serdes-from-materialized-serdes]
>  a change that introduces this would impact state restoration when using an 
> input topic for recovery.





[jira] [Created] (KAFKA-8136) Flaky Test MetadataRequestTest#testAllTopicsRequest

2019-03-20 Thread Matthias J. Sax (JIRA)
Matthias J. Sax created KAFKA-8136:
--

 Summary: Flaky Test MetadataRequestTest#testAllTopicsRequest
 Key: KAFKA-8136
 URL: https://issues.apache.org/jira/browse/KAFKA-8136
 Project: Kafka
  Issue Type: Bug
  Components: core, unit tests
Affects Versions: 2.2.0
Reporter: Matthias J. Sax
 Fix For: 2.3.0, 2.2.1


[https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/78/testReport/junit/kafka.server/MetadataRequestTest/testAllTopicsRequest/]
{quote}java.lang.AssertionError: Partition [t2,0] metadata not propagated after 
15000 ms at kafka.utils.TestUtils$.fail(TestUtils.scala:381) at 
kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:791) at 
kafka.utils.TestUtils$.waitUntilMetadataIsPropagated(TestUtils.scala:880) at 
kafka.utils.TestUtils$.$anonfun$createTopic$3(TestUtils.scala:318) at 
kafka.utils.TestUtils$.$anonfun$createTopic$3$adapted(TestUtils.scala:317) at 
scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:237) at 
scala.collection.immutable.Range.foreach(Range.scala:158) at 
scala.collection.TraversableLike.map(TraversableLike.scala:237) at 
scala.collection.TraversableLike.map$(TraversableLike.scala:230) at 
scala.collection.AbstractTraversable.map(Traversable.scala:108) at 
kafka.utils.TestUtils$.createTopic(TestUtils.scala:317) at 
kafka.integration.KafkaServerTestHarness.createTopic(KafkaServerTestHarness.scala:125)
 at 
kafka.server.MetadataRequestTest.testAllTopicsRequest(MetadataRequestTest.scala:201){quote}
STDOUT
{quote}[2019-03-20 00:05:17,921] ERROR [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition replicaDown-0 at offset 0 (kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
[2019-03-20 00:05:23,520] WARN Unable to read additional data from client sessionid 0x10033b4d8c6, likely client has closed socket (org.apache.zookeeper.server.NIOServerCnxn:376)
[2019-03-20 00:05:23,794] ERROR [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Error for partition testAutoCreate_Topic-0 at offset 0 (kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
[2019-03-20 00:05:30,735] ERROR [ReplicaFetcher replicaId=2, leaderId=0, fetcherId=0] Error for partition __consumer_offsets-2 at offset 0 (kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
[2019-03-20 00:05:31,156] ERROR [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition notInternal-0 at offset 0 (kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
[2019-03-20 00:05:31,156] ERROR [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Error for partition notInternal-1 at offset 0 (kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
[2019-03-20 00:05:37,817] WARN Unable to read additional data from client sessionid 0x10033b51c370002, likely client has closed socket (org.apache.zookeeper.server.NIOServerCnxn:376)
[2019-03-20 00:05:51,571] ERROR [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition t1-2 at offset 0 (kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
[2019-03-20 00:05:51,571] ERROR [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Error for partition t1-1 at offset 0 (kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
[2019-03-20 00:06:22,153] ERROR [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Error for partition t1-0 at offset 0 (kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
[2019-03-20 00:06:22,622] ERROR [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Error for partition t2-0 at offset 0 (kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
[2019-03-20 00:06:35,106] ERROR [ReplicaFetcher replicaId=0, leaderId=1, fetcherId=0] Error for partition t1-0 at offset 0 (kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
[2019-03-20 00:06:35,129] ERROR [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Error for partition t1-1 at offset 0 (kafka.server.ReplicaFetch
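
For context, a minimal sketch (bootstrap address invented) of roughly the
client-side equivalent of the all-topics metadata request this test sends:

{code:java}
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;

Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
try (AdminClient admin = AdminClient.create(props)) {
    // an all-topics metadata lookup; the test asserts the created topics
    // (t1, t2) show up once their metadata has propagated
    System.out.println(admin.listTopics().names().get());
}
{code}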

[jira] [Created] (KAFKA-8137) Flaky Test LegacyAdminClientTest#testOffsetsForTimesWhenOffsetNotFound

2019-03-20 Thread Matthias J. Sax (JIRA)
Matthias J. Sax created KAFKA-8137:
--

 Summary: Flaky Test 
LegacyAdminClientTest#testOffsetsForTimesWhenOffsetNotFound
 Key: KAFKA-8137
 URL: https://issues.apache.org/jira/browse/KAFKA-8137
 Project: Kafka
  Issue Type: Bug
  Components: admin, unit tests
Affects Versions: 2.2.0
Reporter: Matthias J. Sax
 Fix For: 2.3.0, 2.2.1


[https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/80/testReport/junit/kafka.api/LegacyAdminClientTest/testOffsetsForTimesWhenOffsetNotFound/]
{quote}java.lang.AssertionError: Partition [topic,0] metadata not propagated 
after 15000 ms at kafka.utils.TestUtils$.fail(TestUtils.scala:381) at 
kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:791) at 
kafka.utils.TestUtils$.waitUntilMetadataIsPropagated(TestUtils.scala:880) at 
kafka.utils.TestUtils$.$anonfun$createTopic$3(TestUtils.scala:318) at 
kafka.utils.TestUtils$.$anonfun$createTopic$3$adapted(TestUtils.scala:317) at 
scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:237) at 
scala.collection.immutable.Range.foreach(Range.scala:158) at 
scala.collection.TraversableLike.map(TraversableLike.scala:237) at 
scala.collection.TraversableLike.map$(TraversableLike.scala:230) at 
scala.collection.AbstractTraversable.map(Traversable.scala:108) at 
kafka.utils.TestUtils$.createTopic(TestUtils.scala:317) at 
kafka.integration.KafkaServerTestHarness.createTopic(KafkaServerTestHarness.scala:125)
 at kafka.api.LegacyAdminClientTest.setUp(LegacyAdminClientTest.scala:73){quote}
STDOUT
{quote}[2019-03-20 16:28:10,089] ERROR [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition __consumer_offsets-0 at offset 0 (kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
[2019-03-20 16:28:10,093] ERROR [ReplicaFetcher replicaId=2, leaderId=0, fetcherId=0] Error for partition __consumer_offsets-0 at offset 0 (kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
[2019-03-20 16:28:10,303] ERROR [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition topic-0 at offset 0 (kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
[2019-03-20 16:28:10,303] ERROR [ReplicaFetcher replicaId=2, leaderId=0, fetcherId=0] Error for partition topic-0 at offset 0 (kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
[2019-03-20 16:28:14,493] ERROR [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Error for partition __consumer_offsets-0 at offset 0 (kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
[2019-03-20 16:28:14,724] ERROR [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Error for partition topic-1 at offset 0 (kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
[2019-03-20 16:28:21,388] ERROR [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Error for partition __consumer_offsets-0 at offset 0 (kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
[2019-03-20 16:28:21,394] ERROR [ReplicaFetcher replicaId=1, leaderId=2, fetcherId=0] Error for partition __consumer_offsets-0 at offset 0 (kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
[2019-03-20 16:29:48,224] ERROR [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Error for partition __consumer_offsets-0 at offset 0 (kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
[2019-03-20 16:29:48,249] ERROR [ReplicaFetcher replicaId=0, leaderId=1, fetcherId=0] Error for partition __consumer_offsets-0 at offset 0 (kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
[2019-03-20 16:29:49,255] ERROR [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Error for partition __consumer_offsets-0 at offset 0 (kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
[2019-03-20 16:29:49,256] ERROR [ReplicaFetcher replicaId=0, leaderId=1, fetcherId=0] Error for partition __consumer_offsets-0 at offset 0 (k
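
For context, the consumer API under test; a minimal sketch (topic, partition,
and broker address invented) of an {{offsetsForTimes}} lookup that finds no
matching offset:

{code:java}
import java.util.Collections;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndTimestamp;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("key.deserializer", StringDeserializer.class.getName());
props.put("value.deserializer", StringDeserializer.class.getName());
try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
    TopicPartition tp = new TopicPartition("topic", 0);
    // asking for a timestamp newer than any record maps the partition to null
    Map<TopicPartition, OffsetAndTimestamp> result =
        consumer.offsetsForTimes(Collections.singletonMap(tp, Long.MAX_VALUE));
    System.out.println(result.get(tp)); // expected: null
}
{code}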

[jira] [Created] (KAFKA-8138) Flaky Test PlaintextConsumerTest#testFetchRecordLargerThanFetchMaxBytes

2019-03-20 Thread Matthias J. Sax (JIRA)
Matthias J. Sax created KAFKA-8138:
--

 Summary: Flaky Test 
PlaintextConsumerTest#testFetchRecordLargerThanFetchMaxBytes
 Key: KAFKA-8138
 URL: https://issues.apache.org/jira/browse/KAFKA-8138
 Project: Kafka
  Issue Type: Bug
  Components: clients, unit tests
Affects Versions: 2.2.0
Reporter: Matthias J. Sax
 Fix For: 2.3.0, 2.2.1


[https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/80/testReport/junit/kafka.api/PlaintextConsumerTest/testFetchRecordLargerThanFetchMaxBytes/]
{quote}java.lang.AssertionError: Partition [topic,0] metadata not propagated 
after 15000 ms at kafka.utils.TestUtils$.fail(TestUtils.scala:381) at 
kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:791) at 
kafka.utils.TestUtils$.waitUntilMetadataIsPropagated(TestUtils.scala:880) at 
kafka.utils.TestUtils$.$anonfun$createTopic$3(TestUtils.scala:318) at 
kafka.utils.TestUtils$.$anonfun$createTopic$3$adapted(TestUtils.scala:317) at 
scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:237) at 
scala.collection.immutable.Range.foreach(Range.scala:158) at 
scala.collection.TraversableLike.map(TraversableLike.scala:237) at 
scala.collection.TraversableLike.map$(TraversableLike.scala:230) at 
scala.collection.AbstractTraversable.map(Traversable.scala:108) at 
kafka.utils.TestUtils$.createTopic(TestUtils.scala:317) at 
kafka.integration.KafkaServerTestHarness.createTopic(KafkaServerTestHarness.scala:125)
 at kafka.api.BaseConsumerTest.setUp(BaseConsumerTest.scala:69){quote}
STDOUT (truncated)
{quote}[2019-03-20 16:10:19,759] ERROR [ReplicaFetcher replicaId=2, leaderId=0, fetcherId=0] Error for partition __consumer_offsets-0 at offset 0 (kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
[2019-03-20 16:10:19,760] ERROR [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition __consumer_offsets-0 at offset 0 (kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
[2019-03-20 16:10:19,963] ERROR [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition topic-1 at offset 0 (kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
[2019-03-20 16:10:19,964] ERROR [ReplicaFetcher replicaId=1, leaderId=2, fetcherId=0] Error for partition topic-0 at offset 0 (kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
[2019-03-20 16:10:19,975] ERROR [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Error for partition topic-0 at offset 0 (kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.{quote}
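
For context, the consumer setting this test exercises; a hypothetical config
sketch (values invented). A record larger than {{fetch.max.bytes}} is still
returned, alone in its fetch, so the consumer can make progress:

{code:java}
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;

Properties props = new Properties();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
// deliberately smaller than the test record; the broker still delivers the
// oversized record rather than stalling the consumer
props.put(ConsumerConfig.FETCH_MAX_BYTES_CONFIG, 10 * 1024);
props.put(ConsumerConfig.MAX_PARTITION_FETCH_BYTES_CONFIG, 10 * 1024);
{code}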





[jira] [Created] (KAFKA-8139) Flaky Test SaslSslAdminClientIntegrationTest#testMetadataRefresh

2019-03-20 Thread Matthias J. Sax (JIRA)
Matthias J. Sax created KAFKA-8139:
--

 Summary: Flaky Test 
SaslSslAdminClientIntegrationTest#testMetadataRefresh
 Key: KAFKA-8139
 URL: https://issues.apache.org/jira/browse/KAFKA-8139
 Project: Kafka
  Issue Type: Bug
  Components: admin, unit tests
Affects Versions: 2.2.0
Reporter: Matthias J. Sax
 Fix For: 2.3.0, 2.2.1


[https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/80/testReport/junit/kafka.api/SaslSslAdminClientIntegrationTest/testMetadataRefresh/]
{quote}org.junit.runners.model.TestTimedOutException: test timed out after 
120000 milliseconds at java.lang.Object.wait(Native Method) at 
java.util.concurrent.ForkJoinTask.externalAwaitDone(ForkJoinTask.java:334) at 
java.util.concurrent.ForkJoinTask.doJoin(ForkJoinTask.java:391) at 
java.util.concurrent.ForkJoinTask.join(ForkJoinTask.java:719) at 
scala.collection.parallel.ForkJoinTasks$WrappedTask.sync(Tasks.scala:379) at 
scala.collection.parallel.ForkJoinTasks$WrappedTask.sync$(Tasks.scala:379) at 
scala.collection.parallel.AdaptiveWorkStealingForkJoinTasks$WrappedTask.sync(Tasks.scala:440)
 at 
scala.collection.parallel.ForkJoinTasks.executeAndWaitResult(Tasks.scala:423) 
at 
scala.collection.parallel.ForkJoinTasks.executeAndWaitResult$(Tasks.scala:416) 
at 
scala.collection.parallel.ForkJoinTaskSupport.executeAndWaitResult(TaskSupport.scala:60)
 at 
scala.collection.parallel.ExecutionContextTasks.executeAndWaitResult(Tasks.scala:555)
 at 
scala.collection.parallel.ExecutionContextTasks.executeAndWaitResult$(Tasks.scala:555)
 at 
scala.collection.parallel.ExecutionContextTaskSupport.executeAndWaitResult(TaskSupport.scala:84)
 at 
scala.collection.parallel.ParIterableLike.foreach(ParIterableLike.scala:465) at 
scala.collection.parallel.ParIterableLike.foreach$(ParIterableLike.scala:464) 
at scala.collection.parallel.mutable.ParArray.foreach(ParArray.scala:58) at 
kafka.utils.TestUtils$.shutdownServers(TestUtils.scala:201) at 
kafka.integration.KafkaServerTestHarness.tearDown(KafkaServerTestHarness.scala:113)
 at kafka.api.IntegrationTestHarness.tearDown(IntegrationTestHarness.scala:134) 
at 
kafka.api.AdminClientIntegrationTest.tearDown(AdminClientIntegrationTest.scala:87)
 at 
kafka.api.SaslSslAdminClientIntegrationTest.tearDown(SaslSslAdminClientIntegrationTest.scala:90){quote}
STDOUT
{quote}[2019-03-20 16:30:35,739] ERROR [KafkaServer id=0] Fatal error during 
KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer:159) 
java.lang.IllegalArgumentException: Could not find a 'KafkaServer' or 
'sasl_ssl.KafkaServer' entry in the JAAS configuration. System property 
'java.security.auth.login.config' is not set at 
org.apache.kafka.common.security.JaasContext.defaultContext(JaasContext.java:133)
 at org.apache.kafka.common.security.JaasContext.load(JaasContext.java:98) at 
org.apache.kafka.common.security.JaasContext.loadServerContext(JaasContext.java:70)
 at 
org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:121)
 at 
org.apache.kafka.common.network.ChannelBuilders.serverChannelBuilder(ChannelBuilders.java:85)
 at kafka.network.Processor.<init>(SocketServer.scala:694) at 
kafka.network.SocketServer.newProcessor(SocketServer.scala:344) at 
kafka.network.SocketServer.$anonfun$addDataPlaneProcessors$1(SocketServer.scala:253)
 at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:158) at 
kafka.network.SocketServer.addDataPlaneProcessors(SocketServer.scala:252) at 
kafka.network.SocketServer.$anonfun$createDataPlaneAcceptorsAndProcessors$1(SocketServer.scala:216)
 at 
kafka.network.SocketServer.$anonfun$createDataPlaneAcceptorsAndProcessors$1$adapted(SocketServer.scala:214)
 at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62) at 
scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55) at 
scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49) at 
kafka.network.SocketServer.createDataPlaneAcceptorsAndProcessors(SocketServer.scala:214)
 at kafka.network.SocketServer.startup(SocketServer.scala:114) at 
kafka.server.KafkaServer.startup(KafkaServer.scala:253) at 
kafka.utils.TestUtils$.createServer(TestUtils.scala:140) at 
kafka.integration.KafkaServerTestHarness.$anonfun$setUp$1(KafkaServerTestHarness.scala:101)
 at scala.collection.Iterator.foreach(Iterator.scala:941) at 
scala.collection.Iterator.foreach$(Iterator.scala:941) at 
scala.collection.AbstractIterator.foreach(Iterator.scala:1429) at 
scala.collection.IterableLike.foreach(IterableLike.scala:74) at 
scala.collection.IterableLike.foreach$(IterableLike.scala:73) at 
scala.collection.AbstractIterable.foreach(Iterable.scala:56) at 
kafka.integration.KafkaServerTestHarness.setUp(KafkaServerTestHarness.scala:100)
 at kafka.api.IntegrationTestHarness.doSetup(IntegrationTestHarness.scala:81) 
at kafka.api.IntegrationTestHarness.setUp(IntegrationTestHarness.scala:73

[jira] [Created] (KAFKA-8140) Flaky Test SaslSslAdminClientIntegrationTest#testDescribeAndAlterConfigs

2019-03-20 Thread Matthias J. Sax (JIRA)
Matthias J. Sax created KAFKA-8140:
--

 Summary: Flaky Test 
SaslSslAdminClientIntegrationTest#testDescribeAndAlterConfigs
 Key: KAFKA-8140
 URL: https://issues.apache.org/jira/browse/KAFKA-8140
 Project: Kafka
  Issue Type: Bug
  Components: admin, unit tests
Affects Versions: 2.2.0
Reporter: Matthias J. Sax
 Fix For: 2.3.0, 2.2.1


[https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/80/testReport/junit/kafka.api/SaslSslAdminClientIntegrationTest/testDescribeAndAlterConfigs/]
{quote}java.lang.IllegalArgumentException: Could not find a 'KafkaServer' or 
'sasl_ssl.KafkaServer' entry in the JAAS configuration. System property 
'java.security.auth.login.config' is not set at 
org.apache.kafka.common.security.JaasContext.defaultContext(JaasContext.java:133)
 at org.apache.kafka.common.security.JaasContext.load(JaasContext.java:98) at 
org.apache.kafka.common.security.JaasContext.loadServerContext(JaasContext.java:70)
 at 
org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:121)
 at 
org.apache.kafka.common.network.ChannelBuilders.serverChannelBuilder(ChannelBuilders.java:85)
 at kafka.network.Processor.<init>(SocketServer.scala:694) at 
kafka.network.SocketServer.newProcessor(SocketServer.scala:344) at 
kafka.network.SocketServer.$anonfun$addDataPlaneProcessors$1(SocketServer.scala:253)
 at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:158) at 
kafka.network.SocketServer.addDataPlaneProcessors(SocketServer.scala:252) at 
kafka.network.SocketServer.$anonfun$createDataPlaneAcceptorsAndProcessors$1(SocketServer.scala:216)
 at 
kafka.network.SocketServer.$anonfun$createDataPlaneAcceptorsAndProcessors$1$adapted(SocketServer.scala:214)
 at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62) at 
scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55) at 
scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49) at 
kafka.network.SocketServer.createDataPlaneAcceptorsAndProcessors(SocketServer.scala:214)
 at kafka.network.SocketServer.startup(SocketServer.scala:114) at 
kafka.server.KafkaServer.startup(KafkaServer.scala:253) at 
kafka.utils.TestUtils$.createServer(TestUtils.scala:140) at 
kafka.integration.KafkaServerTestHarness.$anonfun$setUp$1(KafkaServerTestHarness.scala:101)
 at scala.collection.Iterator.foreach(Iterator.scala:941) at 
scala.collection.Iterator.foreach$(Iterator.scala:941) at 
scala.collection.AbstractIterator.foreach(Iterator.scala:1429) at 
scala.collection.IterableLike.foreach(IterableLike.scala:74) at 
scala.collection.IterableLike.foreach$(IterableLike.scala:73) at 
scala.collection.AbstractIterable.foreach(Iterable.scala:56) at 
kafka.integration.KafkaServerTestHarness.setUp(KafkaServerTestHarness.scala:100)
 at kafka.api.IntegrationTestHarness.doSetup(IntegrationTestHarness.scala:81) 
at kafka.api.IntegrationTestHarness.setUp(IntegrationTestHarness.scala:73) at 
kafka.api.AdminClientIntegrationTest.setUp(AdminClientIntegrationTest.scala:79) 
at 
kafka.api.SaslSslAdminClientIntegrationTest.setUp(SaslSslAdminClientIntegrationTest.scala:64){quote}
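
For context, this failure means the test JVM was started without a JAAS login
file: per the error text, the broker looks for a listener-prefixed section
such as {{sasl_ssl.KafkaServer}}, falling back to {{KafkaServer}}. A minimal
sketch of such a file (login module and credentials invented), passed via
{{-Djava.security.auth.login.config=/path/to/kafka_server_jaas.conf}}:

{code}
sasl_ssl.KafkaServer {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="admin"
  password="admin-secret"
  user_admin="admin-secret";
};
{code}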





[jira] [Commented] (KAFKA-7989) Flaky Test RequestQuotaTest#testResponseThrottleTimeWhenBothFetchAndRequestQuotasViolated

2019-03-20 Thread Matthias J. Sax (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16797726#comment-16797726
 ] 

Matthias J. Sax commented on KAFKA-7989:


One more: 
[https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/80/testReport/junit/kafka.server/RequestQuotaTest/testResponseThrottleTimeWhenBothFetchAndRequestQuotasViolated/]

> Flaky Test 
> RequestQuotaTest#testResponseThrottleTimeWhenBothFetchAndRequestQuotasViolated
> -
>
> Key: KAFKA-7989
> URL: https://issues.apache.org/jira/browse/KAFKA-7989
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Assignee: Anna Povzner
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.2.1
>
>
> To get stable nightly builds for the `2.2` release, I create tickets for all 
> observed test failures.
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/27/]
> {quote}java.util.concurrent.ExecutionException: java.lang.AssertionError: 
> Throttle time metrics for consumer quota not updated: 
> small-quota-consumer-client at 
> java.util.concurrent.FutureTask.report(FutureTask.java:122) at 
> java.util.concurrent.FutureTask.get(FutureTask.java:206) at 
> kafka.server.RequestQuotaTest.$anonfun$waitAndCheckResults$1(RequestQuotaTest.scala:415)
>  at scala.collection.immutable.List.foreach(List.scala:392) at 
> scala.collection.generic.TraversableForwarder.foreach(TraversableForwarder.scala:38)
>  at 
> scala.collection.generic.TraversableForwarder.foreach$(TraversableForwarder.scala:38)
>  at scala.collection.mutable.ListBuffer.foreach(ListBuffer.scala:47) at 
> kafka.server.RequestQuotaTest.waitAndCheckResults(RequestQuotaTest.scala:413) 
> at 
> kafka.server.RequestQuotaTest.testResponseThrottleTimeWhenBothFetchAndRequestQuotasViolated(RequestQuotaTest.scala:134){quote}





[jira] [Created] (KAFKA-8141) Flaky Test FetchRequestDownConversionConfigTest#testV1FetchWithDownConversionDisabled

2019-03-20 Thread Matthias J. Sax (JIRA)
Matthias J. Sax created KAFKA-8141:
--

 Summary: Flaky Test 
FetchRequestDownConversionConfigTest#testV1FetchWithDownConversionDisabled
 Key: KAFKA-8141
 URL: https://issues.apache.org/jira/browse/KAFKA-8141
 Project: Kafka
  Issue Type: Bug
  Components: core, unit tests
Affects Versions: 2.2.0
Reporter: Matthias J. Sax
 Fix For: 2.3.0, 2.2.1


[https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/80/testReport/junit/kafka.server/FetchRequestDownConversionConfigTest/testV1FetchWithDownConversionDisabled/]
{quote}java.lang.AssertionError: Partition [__consumer_offsets,0] metadata not 
propagated after 15000 ms at kafka.utils.TestUtils$.fail(TestUtils.scala:381) 
at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:791) at 
kafka.utils.TestUtils$.waitUntilMetadataIsPropagated(TestUtils.scala:880) at 
kafka.utils.TestUtils$.$anonfun$createTopic$3(TestUtils.scala:318) at 
kafka.utils.TestUtils$.$anonfun$createTopic$3$adapted(TestUtils.scala:317) at 
scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:237) at 
scala.collection.immutable.Range.foreach(Range.scala:158) at 
scala.collection.TraversableLike.map(TraversableLike.scala:237) at 
scala.collection.TraversableLike.map$(TraversableLike.scala:230) at 
scala.collection.AbstractTraversable.map(Traversable.scala:108) at 
kafka.utils.TestUtils$.createTopic(TestUtils.scala:317) at 
kafka.utils.TestUtils$.createOffsetsTopic(TestUtils.scala:375) at 
kafka.api.IntegrationTestHarness.doSetup(IntegrationTestHarness.scala:95) at 
kafka.api.IntegrationTestHarness.setUp(IntegrationTestHarness.scala:73){quote}
 

 





[jira] [Assigned] (KAFKA-8142) Kafka Streams fails with NPE if records contains null-value in header

2019-03-21 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax reassigned KAFKA-8142:
--

Assignee: Matthias J. Sax

> Kafka Streams fails with NPE if records contains null-value in header
> -
>
> Key: KAFKA-8142
> URL: https://issues.apache.org/jira/browse/KAFKA-8142
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.1.1
>Reporter: Matthias J. Sax
>Assignee: Matthias J. Sax
>Priority: Major
>
> [2019-03-14 13:14:49,756] ERROR stream-thread 
> [-2-1-1f30b4e6-1204-4aa7-9426-a395ab06ad64-StreamThread-2] 
> Failed to process stream task 9_2 due to the following error: 
> (org.apache.kafka.str
> eams.processor.internals.AssignedStreamsTasks)
> java.lang.NullPointerException
> at 
> org.apache.kafka.streams.processor.internals.ProcessorRecordContext.sizeBytes(ProcessorRecordContext.java:93)
> at 
> org.apache.kafka.streams.state.internals.ContextualRecord.sizeBytes(ContextualRecord.java:42)
> at 
> org.apache.kafka.streams.state.internals.LRUCacheEntry.<init>(LRUCacheEntry.java:53)
> at 
> org.apache.kafka.streams.state.internals.CachingWindowStore.put(CachingWindowStore.java:160)
> at 
> org.apache.kafka.streams.state.internals.CachingWindowStore.put(CachingWindowStore.java:36)
> at 
> org.apache.kafka.streams.state.internals.MeteredWindowStore.put(MeteredWindowStore.java:114)
> at 
> org.apache.kafka.streams.kstream.internals.KStreamWindowAggregate$KStreamWindowAggregateProcessor.process(KStreamWindowAggregate.java:124)
> at 
> org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:117)
> at 
> org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:146)
> at 
> org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:129)
> at 
> org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:93)
> at 
> org.apache.kafka.streams.processor.internals.SourceNode.process(SourceNode.java:84)
> at 
> org.apache.kafka.streams.processor.internals.StreamTask.process(StreamTask.java:351)
> at 
> org.apache.kafka.streams.processor.internals.AssignedStreamsTasks.process(AssignedStreamsTasks.java:104)
> at 
> org.apache.kafka.streams.processor.internals.TaskManager.process(TaskManager.java:413)
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:862)
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:777)
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:747)





[jira] [Created] (KAFKA-8142) Kafka Streams fails with NPE if records contains null-value in header

2019-03-21 Thread Matthias J. Sax (JIRA)
Matthias J. Sax created KAFKA-8142:
--

 Summary: Kafka Streams fails with NPE if records contains 
null-value in header
 Key: KAFKA-8142
 URL: https://issues.apache.org/jira/browse/KAFKA-8142
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 2.1.1
Reporter: Matthias J. Sax


[2019-03-14 13:14:49,756] ERROR stream-thread 
[-2-1-1f30b4e6-1204-4aa7-9426-a395ab06ad64-StreamThread-2] Failed 
to process stream task 9_2 due to the following error: (org.apache.kafka.str
eams.processor.internals.AssignedStreamsTasks)
java.lang.NullPointerException
at 
org.apache.kafka.streams.processor.internals.ProcessorRecordContext.sizeBytes(ProcessorRecordContext.java:93)
at 
org.apache.kafka.streams.state.internals.ContextualRecord.sizeBytes(ContextualRecord.java:42)
at 
org.apache.kafka.streams.state.internals.LRUCacheEntry.<init>(LRUCacheEntry.java:53)
at 
org.apache.kafka.streams.state.internals.CachingWindowStore.put(CachingWindowStore.java:160)
at 
org.apache.kafka.streams.state.internals.CachingWindowStore.put(CachingWindowStore.java:36)
at 
org.apache.kafka.streams.state.internals.MeteredWindowStore.put(MeteredWindowStore.java:114)
at 
org.apache.kafka.streams.kstream.internals.KStreamWindowAggregate$KStreamWindowAggregateProcessor.process(KStreamWindowAggregate.java:124)
at 
org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:117)
at 
org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:146)
at 
org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:129)
at 
org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:93)
at 
org.apache.kafka.streams.processor.internals.SourceNode.process(SourceNode.java:84)
at 
org.apache.kafka.streams.processor.internals.StreamTask.process(StreamTask.java:351)
at 
org.apache.kafka.streams.processor.internals.AssignedStreamsTasks.process(AssignedStreamsTasks.java:104)
at 
org.apache.kafka.streams.processor.internals.TaskManager.process(TaskManager.java:413)
at 
org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:862)
at 
org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:777)
at 
org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:747)
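
For context, a minimal sketch (topic, header name, and broker address
invented) of a record that triggers this path; the Headers API permits a null
header value, which {{ProcessorRecordContext.sizeBytes}} then dereferences:

{code:java}
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092"); // invented
props.put("key.serializer", StringSerializer.class.getName());
props.put("value.serializer", StringSerializer.class.getName());

try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
    ProducerRecord<String, String> record =
        new ProducerRecord<>("input-topic", "key", "value"); // names invented
    // a null header value is legal on the producer side; a Streams app that
    // caches this record then hits the NPE shown in the stack trace above
    record.headers().add("trace-id", null);
    producer.send(record);
}
{code}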





[jira] [Updated] (KAFKA-8142) Kafka Streams fails with NPE if record contains null-value in header

2019-03-21 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-8142:
---
Summary: Kafka Streams fails with NPE if record contains null-value in 
header  (was: Kafka Streams fails with NPE if records contains null-value in 
header)

> Kafka Streams fails with NPE if record contains null-value in header
> 
>
> Key: KAFKA-8142
> URL: https://issues.apache.org/jira/browse/KAFKA-8142
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.1.1
>Reporter: Matthias J. Sax
>Assignee: Matthias J. Sax
>Priority: Major
>
> [2019-03-14 13:14:49,756] ERROR stream-thread 
> [-2-1-1f30b4e6-1204-4aa7-9426-a395ab06ad64-StreamThread-2] 
> Failed to process stream task 9_2 due to the following error: 
> (org.apache.kafka.str
> eams.processor.internals.AssignedStreamsTasks)
> java.lang.NullPointerException
> at 
> org.apache.kafka.streams.processor.internals.ProcessorRecordContext.sizeBytes(ProcessorRecordContext.java:93)
> at 
> org.apache.kafka.streams.state.internals.ContextualRecord.sizeBytes(ContextualRecord.java:42)
> at 
> org.apache.kafka.streams.state.internals.LRUCacheEntry.<init>(LRUCacheEntry.java:53)
> at 
> org.apache.kafka.streams.state.internals.CachingWindowStore.put(CachingWindowStore.java:160)
> at 
> org.apache.kafka.streams.state.internals.CachingWindowStore.put(CachingWindowStore.java:36)
> at 
> org.apache.kafka.streams.state.internals.MeteredWindowStore.put(MeteredWindowStore.java:114)
> at 
> org.apache.kafka.streams.kstream.internals.KStreamWindowAggregate$KStreamWindowAggregateProcessor.process(KStreamWindowAggregate.java:124)
> at 
> org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:117)
> at 
> org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:146)
> at 
> org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:129)
> at 
> org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:93)
> at 
> org.apache.kafka.streams.processor.internals.SourceNode.process(SourceNode.java:84)
> at 
> org.apache.kafka.streams.processor.internals.StreamTask.process(StreamTask.java:351)
> at 
> org.apache.kafka.streams.processor.internals.AssignedStreamsTasks.process(AssignedStreamsTasks.java:104)
> at 
> org.apache.kafka.streams.processor.internals.TaskManager.process(TaskManager.java:413)
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:862)
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:777)
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:747)





[jira] [Commented] (KAFKA-8042) Kafka Streams creates many segment stores on state restore

2019-03-21 Thread Matthias J. Sax (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16798432#comment-16798432
 ] 

Matthias J. Sax commented on KAFKA-8042:


Is this different to KAFKA-7934 or a duplicate?

> Kafka Streams creates many segment stores on state restore
> --
>
> Key: KAFKA-8042
> URL: https://issues.apache.org/jira/browse/KAFKA-8042
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.1.0, 2.1.1
>Reporter: Adrian McCague
>Priority: Major
> Attachments: StateStoreSegments-StreamsConfig.txt
>
>
> Note that this is from the perspective of one instance of an application, 
> where there are 8 instances total, with partition count 8 for all topics and 
> of course stores. Standby replicas = 1.
> In the process there are multiple instances of {{KafkaStreams}} so the below 
> detail is from one of these.
> h2. Actual Behaviour
> During state restore of an application, many segment stores are created (I am 
> using MANIFEST files as a marker since they preallocate 4MB each). As can be 
> seen this topology has 5 joins - which is the extent of its state.
> {code:java}
> bash-4.2# pwd
> /data/fooapp/0_7
> bash-4.2# for dir in $(find . -maxdepth 1 -type d); do echo "${dir}: $(find 
> ${dir} -type f -name 'MANIFEST-*' -printf x | wc -c)"; done
> .: 8058
> ./KSTREAM-JOINOTHER-25-store: 851
> ./KSTREAM-JOINOTHER-40-store: 819
> ./KSTREAM-JOINTHIS-24-store: 851
> ./KSTREAM-JOINTHIS-29-store: 836
> ./KSTREAM-JOINOTHER-35-store: 819
> ./KSTREAM-JOINOTHER-30-store: 819
> ./KSTREAM-JOINOTHER-45-store: 745
> ./KSTREAM-JOINTHIS-39-store: 819
> ./KSTREAM-JOINTHIS-44-store: 685
> ./KSTREAM-JOINTHIS-34-store: 819
> There are many (x800 as above) of these segment files:
> ./KSTREAM-JOINOTHER-25-store.155146629
> ./KSTREAM-JOINOTHER-25-store.155155902
> ./KSTREAM-JOINOTHER-25-store.155149269
> ./KSTREAM-JOINOTHER-25-store.155154879
> ./KSTREAM-JOINOTHER-25-store.155169861
> ./KSTREAM-JOINOTHER-25-store.155153064
> ./KSTREAM-JOINOTHER-25-store.155148444
> ./KSTREAM-JOINOTHER-25-store.155155671
> ./KSTREAM-JOINOTHER-25-store.155168673
> ./KSTREAM-JOINOTHER-25-store.155159565
> ./KSTREAM-JOINOTHER-25-store.155175735
> ./KSTREAM-JOINOTHER-25-store.155168574
> ./KSTREAM-JOINOTHER-25-store.155163525
> ./KSTREAM-JOINOTHER-25-store.155165241
> ./KSTREAM-JOINOTHER-25-store.155146662
> ./KSTREAM-JOINOTHER-25-store.155178177
> ./KSTREAM-JOINOTHER-25-store.155158740
> ./KSTREAM-JOINOTHER-25-store.155168145
> ./KSTREAM-JOINOTHER-25-store.155166231
> ./KSTREAM-JOINOTHER-25-store.155172171
> ./KSTREAM-JOINOTHER-25-store.155175075
> ./KSTREAM-JOINOTHER-25-store.155163096
> ./KSTREAM-JOINOTHER-25-store.155161512
> ./KSTREAM-JOINOTHER-25-store.155179233
> ./KSTREAM-JOINOTHER-25-store.155146266
> ./KSTREAM-JOINOTHER-25-store.155153691
> ./KSTREAM-JOINOTHER-25-store.155159235
> ./KSTREAM-JOINOTHER-25-store.155152734
> ./KSTREAM-JOINOTHER-25-store.155160687
> ./KSTREAM-JOINOTHER-25-store.155174415
> ./KSTREAM-JOINOTHER-25-store.155150820
> ./KSTREAM-JOINOTHER-25-store.155148642
> ... etc
> {code}
> Once re-balancing and state restoration is complete - the redundant segment 
> files are deleted and the segment count drops to 508 total (where the above 
> mentioned state directory is one of many).
> We have seen the number of these segment stores grow to as many as 15000 over 
> the baseline 508 which can fill smaller volumes. *This means that a state 
> volume that would normally have ~300MB total disk usage would use in excess 
> of 30GB during rebalancing*, mostly preallocated MANIFEST files.
> h2. Expected Behaviour
> For this particular application we expect 508 segment folders total to be 
> active and existing throughout rebalancing. Give or take migrated tasks that 
> are subject to the {{state.cleanup.delay.ms}}.
> h2. Preliminary investigation
> * This does not appear to be the case in v1.1.0. With our application the 
> number of state directories only grows to 670 (over the baseline 508)
> * The MANIFEST files were not preallocated to 4MB in v1.1.0 they are now in 
> v2.1.x, this appears to be expected RocksDB behaviour, but exacerbates the 
> many segment stores.
> * Suspect https://github.com/apache/kafka/pull/5253 to be the source of this 
> change of behaviour.
> A workaround is to use {{rocksdb

[jira] [Commented] (KAFKA-7996) KafkaStreams does not pass timeout when closing Producer

2019-03-21 Thread Matthias J. Sax (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16798528#comment-16798528
 ] 

Matthias J. Sax commented on KAFKA-7996:


How would this config and `KafkaStreams#close(timeout)` relate to each other? 
Not saying we should not add a config, just want to understand it better. What 
is the advantage compared to the alternatives? Could we use 
`KafkaStreams#close(timeout)` and pass it to the internal clients instead 
(advantages/disadvantages)? Maybe those questions could also be part of a KIP 
discussion. I think I still don't fully understand the tradeoffs of the 
different approaches.
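
For context, a minimal sketch (assuming the 2.1+ `close(Duration)` overload)
of the call under discussion:

{code:java}
import java.time.Duration;

// 'streams' is assumed to be a running KafkaStreams instance; close() blocks
// for at most ~30 seconds and returns false if shutdown did not finish in
// time. Per this ticket, the bound is not forwarded to the internal producer,
// which is closed with timeoutMillis = Long.MAX_VALUE instead.
boolean closed = streams.close(Duration.ofSeconds(30));
{code}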

> KafkaStreams does not pass timeout when closing Producer
> 
>
> Key: KAFKA-7996
> URL: https://issues.apache.org/jira/browse/KAFKA-7996
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.1.0
>Reporter: Patrik Kleindl
>Assignee: Lee Dongjin
>Priority: Major
>  Labels: needs-kip
>
> [https://confluentcommunity.slack.com/messages/C48AHTCUQ/convo/C48AHTCUQ-1550831721.026100/]
> We are running 2.1 and have a case where the shutdown of a streams 
> application takes several minutes
> I noticed that although we call streams.close with a timeout of 30 seconds 
> the log says
> [Producer 
> clientId=…-8be49feb-8a2e-4088-bdd7-3c197f6107bb-StreamThread-1-producer] 
> Closing the Kafka producer with timeoutMillis = 9223372036854775807 ms.
> Matthias J Sax [vor 3 Tagen]
> I just checked the code, and yes, we don't provide a timeout for the producer 
> on close()...





[jira] [Commented] (KAFKA-7965) Flaky Test ConsumerBounceTest#testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup

2019-03-21 Thread Matthias J. Sax (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16798572#comment-16798572
 ] 

Matthias J. Sax commented on KAFKA-7965:


[https://builds.apache.org/blue/organizations/jenkins/kafka-2.2-jdk8/detail/kafka-2.2-jdk8/69/tests]

> Flaky Test 
> ConsumerBounceTest#testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup
> 
>
> Key: KAFKA-7965
> URL: https://issues.apache.org/jira/browse/KAFKA-7965
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, consumer, unit tests
>Affects Versions: 2.2.0, 2.3.0
>Reporter: Matthias J. Sax
>Assignee: Stanislav Kozlovski
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.2.1
>
>
> To get stable nightly builds for the `2.2` release, I create tickets for all 
> observed test failures.
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/21/]
> {quote}java.lang.AssertionError: Received 0, expected at least 68 at 
> org.junit.Assert.fail(Assert.java:88) at 
> org.junit.Assert.assertTrue(Assert.java:41) at 
> kafka.api.ConsumerBounceTest.receiveAndCommit(ConsumerBounceTest.scala:557) 
> at 
> kafka.api.ConsumerBounceTest.$anonfun$testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup$1(ConsumerBounceTest.scala:320)
>  at 
> kafka.api.ConsumerBounceTest.$anonfun$testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup$1$adapted(ConsumerBounceTest.scala:319)
>  at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62) 
> at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55) 
> at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49) at 
> kafka.api.ConsumerBounceTest.testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup(ConsumerBounceTest.scala:319){quote}
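
For context, the broker-side limit this test exercises (the group size cap
from KIP-389); a hypothetical properties sketch (value invented):

{code}
# broker override; a running group larger than the cap is disrupted once
# brokers restart with the smaller limit, which is the tested scenario
group.max.size=10
{code}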





[jira] [Commented] (KAFKA-7965) Flaky Test ConsumerBounceTest#testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup

2019-03-21 Thread Matthias J. Sax (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16798577#comment-16798577
 ] 

Matthias J. Sax commented on KAFKA-7965:


[https://builds.apache.org/blue/organizations/jenkins/kafka-trunk-jdk11/detail/kafka-trunk-jdk11/392/tests]

> Flaky Test 
> ConsumerBounceTest#testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup
> 
>
> Key: KAFKA-7965
> URL: https://issues.apache.org/jira/browse/KAFKA-7965
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, consumer, unit tests
>Affects Versions: 2.2.0, 2.3.0
>Reporter: Matthias J. Sax
>Assignee: Stanislav Kozlovski
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.2.1
>
>
> To get stable nightly builds for the `2.2` release, I create tickets for all 
> observed test failures.
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/21/]
> {quote}java.lang.AssertionError: Received 0, expected at least 68 at 
> org.junit.Assert.fail(Assert.java:88) at 
> org.junit.Assert.assertTrue(Assert.java:41) at 
> kafka.api.ConsumerBounceTest.receiveAndCommit(ConsumerBounceTest.scala:557) 
> at 
> kafka.api.ConsumerBounceTest.$anonfun$testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup$1(ConsumerBounceTest.scala:320)
>  at 
> kafka.api.ConsumerBounceTest.$anonfun$testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup$1$adapted(ConsumerBounceTest.scala:319)
>  at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62) 
> at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55) 
> at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49) at 
> kafka.api.ConsumerBounceTest.testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup(ConsumerBounceTest.scala:319){quote}





[jira] [Resolved] (KAFKA-8143) Kafka-Streams GlobalStore cannot be read after application restart

2019-03-21 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax resolved KAFKA-8143.

Resolution: Duplicate

Thanks for reporting this [~lukestephenson]! Closing this as a duplicate. It's 
not a bug, but a limitation in the design.

The processor of a global store is supposed to only load the data unmodified; 
actual processing is not supported atm.
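
For reference, a minimal sketch (2.x API, names invented) of the unsupported
pattern described in the report below: the processor transforms values on
their way into a global store, but on restart the store is restored verbatim
from the source topic, bypassing the processor.

{code:java}
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.processor.AbstractProcessor;
import org.apache.kafka.streams.processor.ProcessorContext;
import org.apache.kafka.streams.state.KeyValueStore;
import org.apache.kafka.streams.state.StoreBuilder;
import org.apache.kafka.streams.state.Stores;

StreamsBuilder builder = new StreamsBuilder();
StoreBuilder<KeyValueStore<String, Long>> lengths =
    Stores.keyValueStoreBuilder(
            Stores.persistentKeyValueStore("lengths-store"),
            Serdes.String(), Serdes.Long())
        .withLoggingDisabled(); // global stores must not have a changelog

builder.addGlobalStore(lengths, "test.topic",
    Consumed.with(Serdes.String(), Serdes.String()),
    () -> new AbstractProcessor<String, String>() {
        private KeyValueStore<String, Long> store;

        @Override
        @SuppressWarnings("unchecked")
        public void init(final ProcessorContext context) {
            super.init(context);
            store = (KeyValueStore<String, Long>) context.getStateStore("lengths-store");
        }

        @Override
        public void process(final String key, final String value) {
            // the transformation that breaks restore: the store holds Longs,
            // but after a restart it is re-read verbatim from test.topic, so
            // the values come back as the original Strings
            store.put(key, (long) value.length());
        }
    });
{code}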

> Kafka-Streams GlobalStore cannot be read after application restart
> --
>
> Key: KAFKA-8143
> URL: https://issues.apache.org/jira/browse/KAFKA-8143
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.1.1
>Reporter: Luke Stephenson
>Priority: Major
>
> I've created a small example application which has a trivial `Processor` 
> which takes messages and stores the length of the String value rather than 
> the value itself.
> That is, the following setup:
> {code:java}
> Topic[String, String]
> Processor[String, String]
> KeyValueStore[String, Long] // Note the Store persists Long values
> {code}
>  
> The example application also has a Thread which periodically displays all 
> values in the KeyValueStore.
> While the application is run, I can publish values to the topic with:
> {code:java}
> root@kafka:/opt/kafka# bin/kafka-console-producer.sh --property 
> "parse.key=true" --property "key.separator=:" --broker-list localhost:9092 
> --topic test.topic
> >1:hello
> >2:abc{code}
> And the background Thread will report the values persisted to the key value 
> store.
> If the application is restarted, when attempting to read from the 
> KeyValueStore it will fail.  It attempts to recover the state from the 
> persistent RocksDB store which fails with:
> {code:java}
> org.apache.kafka.common.errors.SerializationException: Size of data received 
> by LongDeserializer is not 8{code}
> (Note there is no stack trace as SerializationException has disabled it.)
> Debugging appears to reveal that the original data from the Topic is being 
> restored rather than what was modified by the processor.
> I've created a minimal example to show the issue at 
> [https://github.com/lukestephenson/kafka-streams-example]





[jira] [Commented] (KAFKA-8042) Kafka Streams creates many segment stores on state restore

2019-03-21 Thread Matthias J. Sax (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16798585#comment-16798585
 ] 

Matthias J. Sax commented on KAFKA-8042:


[~ableegoldman] From my understanding, the fast forwarding you mention is only 
done for a batch of records. If there is a larger backlog to restore, this 
would only provide a small lookahead and maybe skip over creating some 
segments. However, I believe that we need KAFKA-7934 for a proper fix that 
does a "full look ahead" and allows us to not create any old segments at all.

I am just wondering if this ticket is a duplicate of KAFKA-7934? It would be 
great if [~amccague] could confirm this. If it's not a duplicate, we should 
document the difference explicitly.

> Kafka Streams creates many segment stores on state restore
> --
>
> Key: KAFKA-8042
> URL: https://issues.apache.org/jira/browse/KAFKA-8042
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.1.0, 2.1.1
>Reporter: Adrian McCague
>Priority: Major
> Attachments: StateStoreSegments-StreamsConfig.txt
>
>
> Note that this is from the perspective of one instance of an application, 
> where there are 8 instances total, with partition count 8 for all topics and 
> of course stores. Standby replicas = 1.
> In the process there are multiple instances of {{KafkaStreams}} so the below 
> detail is from one of these.
> h2. Actual Behaviour
> During state restore of an application, many segment stores are created (I am 
> using MANIFEST files as a marker since they preallocate 4MB each). As can be 
> seen, this topology has 5 joins, which is the extent of its state.
> {code:java}
> bash-4.2# pwd
> /data/fooapp/0_7
> bash-4.2# for dir in $(find . -maxdepth 1 -type d); do echo "${dir}: $(find 
> ${dir} -type f -name 'MANIFEST-*' -printf x | wc -c)"; done
> .: 8058
> ./KSTREAM-JOINOTHER-25-store: 851
> ./KSTREAM-JOINOTHER-40-store: 819
> ./KSTREAM-JOINTHIS-24-store: 851
> ./KSTREAM-JOINTHIS-29-store: 836
> ./KSTREAM-JOINOTHER-35-store: 819
> ./KSTREAM-JOINOTHER-30-store: 819
> ./KSTREAM-JOINOTHER-45-store: 745
> ./KSTREAM-JOINTHIS-39-store: 819
> ./KSTREAM-JOINTHIS-44-store: 685
> ./KSTREAM-JOINTHIS-34-store: 819
> There are many (x800 as above) of these segment files:
> ./KSTREAM-JOINOTHER-25-store.155146629
> ./KSTREAM-JOINOTHER-25-store.155155902
> ./KSTREAM-JOINOTHER-25-store.155149269
> ./KSTREAM-JOINOTHER-25-store.155154879
> ./KSTREAM-JOINOTHER-25-store.155169861
> ./KSTREAM-JOINOTHER-25-store.155153064
> ./KSTREAM-JOINOTHER-25-store.155148444
> ./KSTREAM-JOINOTHER-25-store.155155671
> ./KSTREAM-JOINOTHER-25-store.155168673
> ./KSTREAM-JOINOTHER-25-store.155159565
> ./KSTREAM-JOINOTHER-25-store.155175735
> ./KSTREAM-JOINOTHER-25-store.155168574
> ./KSTREAM-JOINOTHER-25-store.155163525
> ./KSTREAM-JOINOTHER-25-store.155165241
> ./KSTREAM-JOINOTHER-25-store.155146662
> ./KSTREAM-JOINOTHER-25-store.155178177
> ./KSTREAM-JOINOTHER-25-store.155158740
> ./KSTREAM-JOINOTHER-25-store.155168145
> ./KSTREAM-JOINOTHER-25-store.155166231
> ./KSTREAM-JOINOTHER-25-store.155172171
> ./KSTREAM-JOINOTHER-25-store.155175075
> ./KSTREAM-JOINOTHER-25-store.155163096
> ./KSTREAM-JOINOTHER-25-store.155161512
> ./KSTREAM-JOINOTHER-25-store.155179233
> ./KSTREAM-JOINOTHER-25-store.155146266
> ./KSTREAM-JOINOTHER-25-store.155153691
> ./KSTREAM-JOINOTHER-25-store.155159235
> ./KSTREAM-JOINOTHER-25-store.155152734
> ./KSTREAM-JOINOTHER-25-store.155160687
> ./KSTREAM-JOINOTHER-25-store.155174415
> ./KSTREAM-JOINOTHER-25-store.155150820
> ./KSTREAM-JOINOTHER-25-store.155148642
> ... etc
> {code}
> Once re-balancing and state restoration are complete, the redundant segment 
> files are deleted and the segment count drops to 508 total (where the above 
> mentioned state directory is one of many).
> We have seen the number of these segment stores grow to as many as 15000 over 
> the baseline 508 which can fill smaller volumes. *This means that a state 
> volume that would normally have ~300MB total disk usage would use in excess 
> of 30GB during rebalancing*, mostly preallocated MANIFEST files.
> h2. Expected Behaviour
> For this particular application we expect 508 segment folders total to be 
> active and existing throughout rebalancing. Give or take migrated tasks that 
> are subject to the 

[jira] [Commented] (KAFKA-7647) Flaky test LogCleanerParameterizedIntegrationTest.testCleansCombinedCompactAndDeleteTopic

2019-03-21 Thread Matthias J. Sax (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16798598#comment-16798598
 ] 

Matthias J. Sax commented on KAFKA-7647:


Failed again: [https://builds.apache.org/job/kafka-pr-jdk11-scala2.12/3402/]

> Flaky test 
> LogCleanerParameterizedIntegrationTest.testCleansCombinedCompactAndDeleteTopic
> -
>
> Key: KAFKA-7647
> URL: https://issues.apache.org/jira/browse/KAFKA-7647
> Project: Kafka
>  Issue Type: Sub-task
>  Components: core, unit tests
>Affects Versions: 2.3.0
>Reporter: Dong Lin
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0
>
>
> {code}
> kafka.log.LogCleanerParameterizedIntegrationTest >
> testCleansCombinedCompactAndDeleteTopic[3] FAILED
>     java.lang.AssertionError: Contents of the map shouldn't change
> expected:<Map(0 -> (340,340), 5 -> (345,345), 10 -> (350,350), 14 ->
> (354,354), 1 -> (341,341), 6 -> (346,346), 9 -> (349,349), 13 -> (353,353),
> 2 -> (342,342), 17 -> (357,357), 12 -> (352,352), 7 -> (347,347), 3 ->
> (343,343), 18 -> (358,358), 16 -> (356,356), 11 -> (351,351), 8 ->
> (348,348), 19 -> (359,359), 4 -> (344,344), 15 -> (355,355))> but
> was:<Map(0 -> (340,340), 5 -> (345,345), 10 -> (350,350), 14 -> (354,354),
> 1 -> (341,341), 6 -> (346,346), 9 -> (349,349), 13 -> (353,353), 2 ->
> (342,342), 17 -> (357,357), 12 -> (352,352), 7 -> (347,347), 3 ->
> (343,343), 18 -> (358,358), 16 -> (356,356), 11 -> (351,351), 99 ->
> (299,299), 8 -> (348,348), 19 -> (359,359), 4 -> (344,344), 15 ->
> (355,355))>
>         at org.junit.Assert.fail(Assert.java:88)
>         at org.junit.Assert.failNotEquals(Assert.java:834)
>         at org.junit.Assert.assertEquals(Assert.java:118)
>         at
> kafka.log.LogCleanerParameterizedIntegrationTest.testCleansCombinedCompactAndDeleteTopic(LogCleanerParameterizedIntegrationTest.scala:129)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-8144) Flaky Test ControllerIntegrationTest#testMetadataPropagationOnControlPlane

2019-03-21 Thread Matthias J. Sax (JIRA)
Matthias J. Sax created KAFKA-8144:
--

 Summary: Flaky Test 
ControllerIntegrationTest#testMetadataPropagationOnControlPlane
 Key: KAFKA-8144
 URL: https://issues.apache.org/jira/browse/KAFKA-8144
 Project: Kafka
  Issue Type: Bug
  Components: core, unit tests
Affects Versions: 2.3.0
Reporter: Matthias J. Sax
 Fix For: 2.3.0


[https://builds.apache.org/blue/organizations/jenkins/kafka-trunk-jdk8/detail/kafka-trunk-jdk8/3485/tests]
{quote}java.lang.AssertionError: expected:<1.0> but was:<0.0>
at org.junit.Assert.fail(Assert.java:89)
at org.junit.Assert.failNotEquals(Assert.java:835)
at org.junit.Assert.assertEquals(Assert.java:555)
at org.junit.Assert.assertEquals(Assert.java:685)
at 
kafka.controller.ControllerIntegrationTest.testMetadataPropagationOnControlPlane(ControllerIntegrationTest.scala:105){quote}
STDOUT
{quote}[2019-03-22 02:42:56,725] ERROR [ReplicaFetcher replicaId=1, leaderId=0, 
fetcherId=0] Error for partition t-0 at offset 0 
(kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
does not host this topic-partition.
[2019-03-22 02:43:00,875] ERROR [ReplicaFetcher replicaId=2, leaderId=0, 
fetcherId=0] Error for partition test-0 at offset 0 
(kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
does not host this topic-partition.
[2019-03-22 02:43:00,876] ERROR [ReplicaFetcher replicaId=1, leaderId=0, 
fetcherId=0] Error for partition test-0 at offset 0 
(kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
does not host this topic-partition.
[2019-03-22 02:43:25,090] ERROR [ReplicaFetcher replicaId=0, leaderId=1, 
fetcherId=0] Error for partition t-0 at offset 0 
(kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
does not host this topic-partition.
[2019-03-22 02:43:32,102] ERROR [ReplicaFetcher replicaId=0, leaderId=1, 
fetcherId=0] Error for partition t-0 at offset 0 
(kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
does not host this topic-partition.
[2019-03-22 02:43:34,073] ERROR [Controller id=0] Error completing preferred 
replica leader election for partition t-0 (kafka.controller.KafkaController:76)
kafka.common.StateChangeFailedException: Failed to elect leader for partition 
t-0 under strategy PreferredReplicaPartitionLeaderElectionStrategy
at 
kafka.controller.PartitionStateMachine.$anonfun$doElectLeaderForPartitions$9(PartitionStateMachine.scala:390)
at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
at 
kafka.controller.PartitionStateMachine.doElectLeaderForPartitions(PartitionStateMachine.scala:388)
at 
kafka.controller.PartitionStateMachine.electLeaderForPartitions(PartitionStateMachine.scala:315)
at 
kafka.controller.PartitionStateMachine.doHandleStateChanges(PartitionStateMachine.scala:225)
at 
kafka.controller.PartitionStateMachine.handleStateChanges(PartitionStateMachine.scala:141)
at 
kafka.controller.KafkaController.kafka$controller$KafkaController$$onPreferredReplicaElection(KafkaController.scala:649)
at 
kafka.controller.KafkaController$PreferredReplicaLeaderElection.handleProcess(KafkaController.scala:1597)
at 
kafka.controller.PreemptableControllerEvent.process(KafkaController.scala:1809)
at 
kafka.controller.PreemptableControllerEvent.process$(KafkaController.scala:1807)
at 
kafka.controller.KafkaController$PreferredReplicaLeaderElection.process(KafkaController.scala:1551)
at 
kafka.controller.ControllerEventManager$ControllerEventThread.$anonfun$doWork$1(ControllerEventManager.scala:95)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:31)
at 
kafka.controller.ControllerEventManager$ControllerEventThread.doWork(ControllerEventManager.scala:95)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:89)
[2019-03-22 02:43:41,232] ERROR [ReplicaFetcher replicaId=0, leaderId=1, 
fetcherId=0] Error for partition t-0 at offset 0 
(kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
does not host this topic-partition.
[2019-03-22 02:43:53,465] ERROR [ReplicaStateMachine controllerId=0] Controller 
moved to another broker when moving some replicas to OfflineReplica state 
(kafka.controller.ReplicaStateMachine:76)
org.apache.kafka.common.errors.ControllerMovedException: Controller epoch 
zkVersion check fails. Expected zkVersion = 1
[2019-03-22 02:43:53,467] INFO [ControllerEventThread controllerId=0] 
Control

[jira] [Assigned] (KAFKA-8078) Flaky Test TableTableJoinIntegrationTest#testInnerInner

2019-03-22 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax reassigned KAFKA-8078:
--

Assignee: (was: Matthias J. Sax)

> Flaky Test TableTableJoinIntegrationTest#testInnerInner
> ---
>
> Key: KAFKA-8078
> URL: https://issues.apache.org/jira/browse/KAFKA-8078
> Project: Kafka
>  Issue Type: Bug
>  Components: streams, unit tests
>Affects Versions: 2.3.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0
>
>
> [https://builds.apache.org/blue/organizations/jenkins/kafka-trunk-jdk8/detail/kafka-trunk-jdk8/3445/tests]
> {quote}java.lang.AssertionError: Condition not met within timeout 15000. 
> Never received expected final result.
> at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:365)
> at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:325)
> at 
> org.apache.kafka.streams.integration.AbstractJoinIntegrationTest.runTest(AbstractJoinIntegrationTest.java:246)
> at 
> org.apache.kafka.streams.integration.TableTableJoinIntegrationTest.testInnerInner(TableTableJoinIntegrationTest.java:196){quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (KAFKA-7991) Add StreamsUpgradeTest for 2.2 release

2019-03-25 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax reassigned KAFKA-7991:
--

Assignee: John Roesler

> Add StreamsUpgradeTest for 2.2 release
> --
>
> Key: KAFKA-7991
> URL: https://issues.apache.org/jira/browse/KAFKA-7991
> Project: Kafka
>  Issue Type: Task
>  Components: streams, system tests
>Affects Versions: 2.3.0
>Reporter: Matthias J. Sax
>Assignee: John Roesler
>Priority: Blocker
> Fix For: 2.3.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-8157) Missing "key.serializer" exception when setting "segment index bytes"

2019-03-26 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-8157:
---
Labels: beginner newbie  (was: features)

> Missing "key.serializer" exception when setting "segment index bytes"
> -
>
> Key: KAFKA-8157
> URL: https://issues.apache.org/jira/browse/KAFKA-8157
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.2.0
> Environment: ubuntu 18.10, localhost and Aiven too
>Reporter: Cristian D
>Priority: Major
>  Labels: beginner, newbie
>
> As a `kafka-streams` user,
> When I set the "segment index bytes" property
> Then I would like to have internal topics with the specified allocated disk 
> space
>  
> At the moment, when setting the "topic.segment.index.bytes" property, the 
> application exits with the following exception:
> {code:java}
> Exception in thread "main" org.apache.kafka.common.config.ConfigException: 
> Missing required configuration "key.serializer" which has no default value.
> {code}
> Tested with `kafka-streams` v2.0.0 and v2.2.0.
>  
> Stack trace:
> {code:java}
> Exception in thread "main" org.apache.kafka.common.config.ConfigException: 
> Missing required configuration "key.serializer" which has no default value.
>  at org.apache.kafka.common.config.ConfigDef.parseValue(ConfigDef.java:474)
>  at org.apache.kafka.common.config.ConfigDef.parse(ConfigDef.java:464)
>  at 
> org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:62)
>  at 
> org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:75)
>  at 
> org.apache.kafka.clients.producer.ProducerConfig.<init>(ProducerConfig.java:392)
>  at 
> org.apache.kafka.streams.StreamsConfig.getMainConsumerConfigs(StreamsConfig.java:1014)
>  at 
> org.apache.kafka.streams.processor.internals.StreamThread.create(StreamThread.java:666)
>  at org.apache.kafka.streams.KafkaStreams.<init>(KafkaStreams.java:718)
>  at org.apache.kafka.streams.KafkaStreams.<init>(KafkaStreams.java:634)
>  at org.apache.kafka.streams.KafkaStreams.<init>(KafkaStreams.java:544)
>  at app.Main.main(Main.java:36)
> {code}
> A demo application simulating the exception:
> https://github.com/razorcd/java-snippets-and-demo-projects/tree/master/kafkastreamsdemo
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8157) Missing "key.serializer" exception when setting "segment index bytes"

2019-03-26 Thread Matthias J. Sax (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16801865#comment-16801865
 ] 

Matthias J. Sax commented on KAFKA-8157:


This might affect versions older than 2.2.0, too. We should double-check all 
older versions and fix them all.
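
For reference, a minimal repro sketch (assumed application id and bootstrap 
servers; the topicPrefix() call expands to the literal 
"topic.segment.index.bytes" from the report):
{code:java}
import java.util.Properties;
import org.apache.kafka.common.config.TopicConfig;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;

public class SegmentIndexBytesRepro {
    public static void main(final String[] args) {
        final StreamsBuilder builder = new StreamsBuilder();
        builder.stream("input-topic").to("output-topic"); // trivial topology

        final Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "segment-index-demo");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // expands to "topic.segment.index.bytes"
        props.put(StreamsConfig.topicPrefix(TopicConfig.SEGMENT_INDEX_BYTES_CONFIG), "52428800");

        // per the report, constructing KafkaStreams throws
        // ConfigException: Missing required configuration "key.serializer"
        new KafkaStreams(builder.build(), props);
    }
}
{code}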

> Missing "key.serializer" exception when setting "segment index bytes"
> -
>
> Key: KAFKA-8157
> URL: https://issues.apache.org/jira/browse/KAFKA-8157
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.2.0
> Environment: ubuntu 18.10, localhost and Aiven too
>Reporter: Cristian D
>Priority: Major
>  Labels: beginner, newbie
>
> As a `kafka-streams` user,
> When I set the "segment index bytes" property
> Then I would like to have internal topics with the specified allocated disk 
> space
>  
> At the moment, when setting the "topic.segment.index.bytes" property, the 
> application exits with the following exception:
> {code:java}
> Exception in thread "main" org.apache.kafka.common.config.ConfigException: 
> Missing required configuration "key.serializer" which has no default value.
> {code}
> Tested with `kafka-streams` v2.0.0 and v2.2.0.
>  
> Stack trace:
> {code:java}
> Exception in thread "main" org.apache.kafka.common.config.ConfigException: 
> Missing required configuration "key.serializer" which has no default value.
>  at org.apache.kafka.common.config.ConfigDef.parseValue(ConfigDef.java:474)
>  at org.apache.kafka.common.config.ConfigDef.parse(ConfigDef.java:464)
>  at 
> org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:62)
>  at 
> org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:75)
>  at 
> org.apache.kafka.clients.producer.ProducerConfig.<init>(ProducerConfig.java:392)
>  at 
> org.apache.kafka.streams.StreamsConfig.getMainConsumerConfigs(StreamsConfig.java:1014)
>  at 
> org.apache.kafka.streams.processor.internals.StreamThread.create(StreamThread.java:666)
>  at org.apache.kafka.streams.KafkaStreams.<init>(KafkaStreams.java:718)
>  at org.apache.kafka.streams.KafkaStreams.<init>(KafkaStreams.java:634)
>  at org.apache.kafka.streams.KafkaStreams.<init>(KafkaStreams.java:544)
>  at app.Main.main(Main.java:36)
> {code}
> A demo application simulating the exception:
> https://github.com/razorcd/java-snippets-and-demo-projects/tree/master/kafkastreamsdemo
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-1149) Please delete old releases from mirroring system

2019-03-26 Thread Matthias J. Sax (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-1149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16802035#comment-16802035
 ] 

Matthias J. Sax commented on KAFKA-1149:


Thanks [~s...@apache.org] – just talked to [~guozhang] about this like 30 
minutes ago.

Also cf https://github.com/apache/kafka-site/pull/194

> Please delete old releases from mirroring system
> 
>
> Key: KAFKA-1149
> URL: https://issues.apache.org/jira/browse/KAFKA-1149
> Project: Kafka
>  Issue Type: Bug
> Environment: http://www.apache.org/dist/kafka/old_releases/
>Reporter: Sebb
>Priority: Major
>
> To reduce the load on the ASF mirrors, projects are required to delete old 
> releases [1]
> Please can you remove all non-current releases?
> Thanks!
> [Note that older releases are always available from the ASF archive server]
> [1] http://www.apache.org/dev/release.html#when-to-archive



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (KAFKA-1149) Please delete old releases from mirroring system

2019-03-26 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-1149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax reassigned KAFKA-1149:
--

Assignee: Guozhang Wang

> Please delete old releases from mirroring system
> 
>
> Key: KAFKA-1149
> URL: https://issues.apache.org/jira/browse/KAFKA-1149
> Project: Kafka
>  Issue Type: Bug
> Environment: http://www.apache.org/dist/kafka/old_releases/
>Reporter: Sebb
>Assignee: Guozhang Wang
>Priority: Major
>
> To reduce the load on the ASF mirrors, projects are required to delete old 
> releases [1]
> Please can you remove all non-current releases?
> Thanks!
> [Note that older releases are always available from the ASF archive server]
> [1] http://www.apache.org/dev/release.html#when-to-archive



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-6399) Consider reducing "max.poll.interval.ms" default for Kafka Streams

2019-03-26 Thread Matthias J. Sax (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16802260#comment-16802260
 ] 

Matthias J. Sax commented on KAFKA-6399:


I think 5 minutes is used for other default timeouts, too, so it makes sense. 
Changing default configs requires a KIP, right?
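
For context, a sketch of what an application can already set today to get a 
smaller timeout back (assumed 5-minute value and placeholder app id):
{code:java}
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.streams.StreamsConfig;

final Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-app"); // hypothetical id
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
// override the Streams default of Integer.MAX_VALUE for the main consumer
props.put(StreamsConfig.consumerPrefix(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG), 300000);
{code}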

> Consider reducing "max.poll.interval.ms" default for Kafka Streams
> --
>
> Key: KAFKA-6399
> URL: https://issues.apache.org/jira/browse/KAFKA-6399
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 1.0.0
>Reporter: Matthias J. Sax
>Assignee: John Roesler
>Priority: Minor
>
> In Kafka {{0.10.2.1}} we change the default value of 
> {{max.poll.intervall.ms}} for Kafka Streams to {{Integer.MAX_VALUE}}. The 
> reason was that long state restore phases during rebalance could yield 
> "rebalance storms" as consumers drop out of a consumer group even if they are 
> healthy as they didn't call {{poll()}} during state restore phase.
> In version {{0.11}} and {{1.0}} the state restore logic was improved a lot 
> and thus, now Kafka Streams does call {{poll()}} even during restore phase. 
> Therefore, we might consider setting a smaller timeout for 
> {{max.poll.intervall.ms}} to detect bad behaving Kafka Streams applications 
> (ie, targeting user code) that don't make progress any more during regular 
> operations.
> The open question would be, what a good default might be. Maybe the actual 
> consumer default of 30 seconds might be sufficient. During one {{poll()}} 
> roundtrip, we would only call {{restoreConsumer.poll()}} once and restore a 
> single batch of records. This should take way less time than 30 seconds.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-8159) Multi-key range queries with negative keyFrom results in unexpected behavior

2019-03-26 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-8159:
---
Labels:   (was: streams)

> Multi-key range queries with negative keyFrom results in unexpected behavior
> 
>
> Key: KAFKA-8159
> URL: https://issues.apache.org/jira/browse/KAFKA-8159
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Sophie Blee-Goldman
>Priority: Major
>
> If a user creates a queryable state store using one of the signed built-in 
> serdes (eg Integer) for the key, there is nothing preventing records with 
> negative keys from being inserted and/or fetched individually. However if the 
> user tries to query the store for a range of keys starting with a negative 
> number, unexpected behavior results that is store-specific.
>  
> For RocksDB stores with caching disabled, Streams will silently miss any 
> negative keys and return only those from the range [0, keyTo].
> For in-memory stores and ANY store with caching enabled, Streams will throw 
> an unchecked exception and crash.
>  
> This situation should be handled more gracefully, or users should be informed 
> of this limitation, and the result should at least be consistent across types 
> of store.
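
For illustration, a sketch of the query pattern in question (hypothetical 
store name; assumes a running KafkaStreams instance {{streams}} with a signed 
Integer key serde):
{code:java}
import org.apache.kafka.streams.state.KeyValueIterator;
import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;

final ReadOnlyKeyValueStore<Integer, Long> store =
    streams.store("counts-store", QueryableStoreTypes.<Integer, Long>keyValueStore());

store.get(-5); // individual fetches of negative keys work fine

// a range starting below zero is where the behavior diverges per store type:
// RocksDB without caching silently returns only [0, 10]; in-memory stores and
// any store with caching enabled throw an unchecked exception
final KeyValueIterator<Integer, Long> iter = store.range(-10, 10);
{code}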



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-8159) Multi-key range queries with negative keyFrom results in unexpected behavior

2019-03-26 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-8159:
---
Component/s: streams

> Multi-key range queries with negative keyFrom results in unexpected behavior
> 
>
> Key: KAFKA-8159
> URL: https://issues.apache.org/jira/browse/KAFKA-8159
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Sophie Blee-Goldman
>Priority: Major
>  Labels: streams
>
> If a user creates a queryable state store using one of the signed built-in 
> serdes (eg Integer) for the key, there is nothing preventing records with 
> negative keys from being inserted and/or fetched individually. However if the 
> user tries to query the store for a range of keys starting with a negative 
> number, unexpected behavior results that is store-specific.
>  
> For RocksDB stores with caching disabled, Streams will silently miss any 
> negative keys and return only those from the range [0, keyTo].
> For in-memory stores and ANY store with caching enabled, Streams will throw 
> an unchecked exception and crash.
>  
> This situation should be handled more gracefully, or users should be informed 
> of this limitation, and the result should at least be consistent across types 
> of store.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (KAFKA-8169) Wrong topic on streams quick start documentation

2019-04-08 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax reassigned KAFKA-8169:
--

Assignee: Tcheutchoua Steve

> Wrong topic on streams quick start documentation
> 
>
> Key: KAFKA-8169
> URL: https://issues.apache.org/jira/browse/KAFKA-8169
> Project: Kafka
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.1.1
>Reporter: Tcheutchoua Steve
>Assignee: Tcheutchoua Steve
>Priority: Minor
>  Labels: documentation
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> [Kafka Streams Quick 
> Start|https://kafka.apache.org/21/documentation/streams/quickstart]
> Throughout the tutorial, the name of the input topic that was created is 
> `streams-plaintext-input`. However, at some point the tutorial mistakenly 
> switches to `streams-wordcount-input`.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-8169) Wrong topic on streams quick start documentation

2019-04-08 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax resolved KAFKA-8169.

   Resolution: Fixed
Fix Version/s: 2.3.0

[~tcheutchoua], I added you to the list of contributors and assigned the ticket 
to you. Thanks for the fix!

> Wrong topic on streams quick start documentation
> 
>
> Key: KAFKA-8169
> URL: https://issues.apache.org/jira/browse/KAFKA-8169
> Project: Kafka
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.1.1
>Reporter: Tcheutchoua Steve
>Assignee: Tcheutchoua Steve
>Priority: Minor
>  Labels: documentation
> Fix For: 2.3.0
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> [Kafka Streams Quick 
> Start|https://kafka.apache.org/21/documentation/streams/quickstart]
> Throughout the tutorial, the name of the input topic that was created is 
> `streams-plaintext-input`. However, at some point the tutorial mistakenly 
> switches to `streams-wordcount-input`.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-8169) Wrong topic on streams quick start documentation

2019-04-08 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-8169:
---
Component/s: streams

> Wrong topic on streams quick start documentation
> 
>
> Key: KAFKA-8169
> URL: https://issues.apache.org/jira/browse/KAFKA-8169
> Project: Kafka
>  Issue Type: Improvement
>  Components: documentation, streams
>Affects Versions: 2.1.1
>Reporter: Tcheutchoua Steve
>Assignee: Tcheutchoua Steve
>Priority: Minor
>  Labels: documentation
> Fix For: 2.3.0
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> [Kafka Streams Quick 
> Start|https://kafka.apache.org/21/documentation/streams/quickstart]
> Throughout the tutorial, the name of the input topic that was created is 
> `streams-plaintext-input`. However, at some point the tutorial mistakenly 
> switches to `streams-wordcount-input`.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-8198) KStreams testing docs use non-existent method "pipe"

2019-04-08 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-8198:
---
Component/s: streams
 documentation

> KStreams testing docs use non-existent method "pipe"
> 
>
> Key: KAFKA-8198
> URL: https://issues.apache.org/jira/browse/KAFKA-8198
> Project: Kafka
>  Issue Type: Bug
>  Components: documentation, streams
>Affects Versions: 1.1.1, 2.0.1, 2.2.0, 2.1.1
>Reporter: Michael Drogalis
>Priority: Minor
>  Labels: documentation, newbie
>
> In [the testing docs for 
> KStreams|https://kafka.apache.org/20/documentation/streams/developer-guide/testing.html],
>  we use the following code snippet:
> {code:java}
> ConsumerRecordFactory<String, Integer> factory = new 
> ConsumerRecordFactory<>("input-topic", new StringSerializer(), new 
> IntegerSerializer());
> testDriver.pipe(factory.create("key", 42L));
> {code}
> As of Apache Kafka 2.2.0, this method no longer exists. We should correct the 
> docs to use the pipeInput method.
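
For reference, a corrected sketch of the snippet above against the 2.2 
test-utils API (also using an Integer value to match the IntegerSerializer):
{code:java}
final ConsumerRecordFactory<String, Integer> factory = new ConsumerRecordFactory<>(
    "input-topic", new StringSerializer(), new IntegerSerializer());
// pipe(...) is gone; pipeInput(...) is the method that exists
testDriver.pipeInput(factory.create("key", 42));
{code}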



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-8181) Streams docs on serialization include Avro header, but no content

2019-04-08 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax resolved KAFKA-8181.

   Resolution: Fixed
Fix Version/s: 2.3.0

> Streams docs on serialization include Avro header, but no content
> -
>
> Key: KAFKA-8181
> URL: https://issues.apache.org/jira/browse/KAFKA-8181
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.2.0
>Reporter: Michael Drogalis
>Priority: Minor
>  Labels: documentation
> Fix For: 2.3.0
>
>
> On [the documentation for data types and 
> serialization|https://kafka.apache.org/10/documentation/streams/developer-guide/datatypes.html],
>  Avro is listed in the table of contents as something supported out of the 
> box. The link is dead, though, because there is no content. We should either 
> remove the header or supply the content.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-8193) Flaky Test MetricsIntegrationTest#testStreamMetricOfWindowStore

2019-04-08 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-8193:
---
Component/s: (was: admin)
 streams

> Flaky Test MetricsIntegrationTest#testStreamMetricOfWindowStore
> ---
>
> Key: KAFKA-8193
> URL: https://issues.apache.org/jira/browse/KAFKA-8193
> Project: Kafka
>  Issue Type: Bug
>  Components: streams, unit tests
>Affects Versions: 2.3.0
>Reporter: Konstantine Karantasis
>Assignee: Guozhang Wang
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0
>
>
> [https://builds.apache.org/job/kafka-pr-jdk11-scala2.12/3576/console]
>  *14:14:48* org.apache.kafka.streams.integration.MetricsIntegrationTest > 
> testStreamMetricOfWindowStore STARTED
> *14:14:59* 
> org.apache.kafka.streams.integration.MetricsIntegrationTest.testStreamMetricOfWindowStore
>  failed, log available in 
> /home/jenkins/jenkins-slave/workspace/kafka-pr-jdk11-scala2.12/streams/build/reports/testOutput/org.apache.kafka.streams.integration.MetricsIntegrationTest.testStreamMetricOfWindowStore.test.stdout
> *14:14:59* 
> *14:14:59* org.apache.kafka.streams.integration.MetricsIntegrationTest > 
> testStreamMetricOfWindowStore FAILED
> *14:14:59* java.lang.AssertionError: Condition not met within timeout 1. 
> testStoreMetricWindow -> Size of metrics of type:'put-latency-avg' must be 
> equal to:2 but it's equal to 0 expected:<2> but was:<0>
> *14:14:59* at 
> org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:361)
> *14:14:59* at 
> org.apache.kafka.streams.integration.MetricsIntegrationTest.testStreamMetricOfWindowStore(MetricsIntegrationTest.java:260)
> *14:15:01*



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8194) MessagesInPerSec incorrect value when Stream produce messages

2019-04-08 Thread Matthias J. Sax (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16812790#comment-16812790
 ] 

Matthias J. Sax commented on KAFKA-8194:


Seems the Jira title is misleading? Can we update it? It's not a Kafka 
Streams issue.

> MessagesInPerSec incorrect value when Stream produce messages
> -
>
> Key: KAFKA-8194
> URL: https://issues.apache.org/jira/browse/KAFKA-8194
> Project: Kafka
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 1.1.0, 2.2.0
>Reporter: Odyldzhon Toshbekov
>Priority: Trivial
> Attachments: Screen Shot 2019-04-05 at 17.51.03.png, Screen Shot 
> 2019-04-05 at 17.52.22.png
>
>
> Looks like metric
> {code:java}
> kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec{code}
> has an incorrect value when messages come via the Kafka Streams API.
> I noticed that the offset for every message from Kafka Streams can be 
> increased by 1, 2, ... However, if messages come to the broker from a plain 
> Kafka producer, it's always incremented by 1.
> Unfortunately, the metric mentioned above is calculated based on offset 
> changes, and as a result we cannot use Streams because the metric will always 
> be incorrect.
> For Kafka 2.2.0
> !Screen Shot 2019-04-05 at 17.51.03.png|width=100%!
>  
> [https://github.com/apache/kafka/blob/2.2.0/core/src/main/scala/kafka/server/ReplicaManager.scala]
> And this is the method used to get "numAppendedMessages"
>  !Screen Shot 2019-04-05 at 17.52.22.png|width=100%!
> https://github.com/apache/kafka/blob/2.2.0/core/src/main/scala/kafka/log/Log.scala



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-8174) Can't call arbitrary SimpleBenchmarks tests from streams_simple_benchmark_test.py

2019-04-08 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-8174:
---
Issue Type: Improvement  (was: Bug)

> Can't call arbitrary SimpleBenchmarks tests from 
> streams_simple_benchmark_test.py
> -
>
> Key: KAFKA-8174
> URL: https://issues.apache.org/jira/browse/KAFKA-8174
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams, system tests
>Reporter: Sophie Blee-Goldman
>Priority: Minor
>
> When using the script streams_simple_benchmark_test.py you should be able to 
> specify a test name and run that particular method in SimpleBenchmarks. This 
> works for most existing benchmarks; however, you can't use this to run the 
> "yahoo" benchmark and you can't add new tests to SimpleBenchmarks and start 
> them successfully. 
>  
> If you try to run the yahoo benchmark or a new test, it fails with the error "Not enough 
> parameters are provided; expecting propFileName, testName, numRecords, 
> keySkew, valueSize" in main(); the missing argument turns out to be testName.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-8174) Can't call arbitrary SimpleBenchmarks tests from streams_simple_benchmark_test.py

2019-04-08 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-8174:
---
Component/s: system tests
 streams

> Can't call arbitrary SimpleBenchmarks tests from 
> streams_simple_benchmark_test.py
> -
>
> Key: KAFKA-8174
> URL: https://issues.apache.org/jira/browse/KAFKA-8174
> Project: Kafka
>  Issue Type: Bug
>  Components: streams, system tests
>Reporter: Sophie Blee-Goldman
>Priority: Minor
>
> When using the script streams_simple_benchmark_test.py you should be able to 
> specify a test name and run that particular method in SimpleBenchmarks. This 
> works for most existing benchmarks; however, you can't use this to run the 
> "yahoo" benchmark and you can't add new tests to SimpleBenchmarks and start 
> them successfully. 
>  
> If you try to run the yahoo benchmark or a new test, it fails with the error "Not enough 
> parameters are provided; expecting propFileName, testName, numRecords, 
> keySkew, valueSize" in main(); the missing argument turns out to be testName.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-6399) Consider reducing "max.poll.interval.ms" default for Kafka Streams

2019-04-08 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6399?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-6399:
---
Fix Version/s: 2.3.0

> Consider reducing "max.poll.interval.ms" default for Kafka Streams
> --
>
> Key: KAFKA-6399
> URL: https://issues.apache.org/jira/browse/KAFKA-6399
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 1.0.0
>Reporter: Matthias J. Sax
>Assignee: John Roesler
>Priority: Minor
> Fix For: 2.3.0
>
>
> In Kafka {{0.10.2.1}} we change the default value of 
> {{max.poll.intervall.ms}} for Kafka Streams to {{Integer.MAX_VALUE}}. The 
> reason was that long state restore phases during rebalance could yield 
> "rebalance storms" as consumers drop out of a consumer group even if they are 
> healthy as they didn't call {{poll()}} during state restore phase.
> In version {{0.11}} and {{1.0}} the state restore logic was improved a lot 
> and thus, now Kafka Streams does call {{poll()}} even during restore phase. 
> Therefore, we might consider setting a smaller timeout for 
> {{max.poll.intervall.ms}} to detect bad behaving Kafka Streams applications 
> (ie, targeting user code) that don't make progress any more during regular 
> operations.
> The open question would be, what a good default might be. Maybe the actual 
> consumer default of 30 seconds might be sufficient. During one {{poll()}} 
> roundtrip, we would only call {{restoreConsumer.poll()}} once and restore a 
> single batch of records. This should take way less time than 30 seconds.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (KAFKA-8181) Streams docs on serialization include Avro header, but no content

2019-04-08 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax reassigned KAFKA-8181:
--

Assignee: Victoria Bialas

> Streams docs on serialization include Avro header, but no content
> -
>
> Key: KAFKA-8181
> URL: https://issues.apache.org/jira/browse/KAFKA-8181
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.2.0
>Reporter: Michael Drogalis
>Assignee: Victoria Bialas
>Priority: Minor
>  Labels: documentation
> Fix For: 2.3.0
>
>
> On [the documentation for data types and 
> serialization|https://kafka.apache.org/10/documentation/streams/developer-guide/datatypes.html],
>  Avro is listed in the table of contents as something supported out of the 
> box. The link is dead, though, because there is no content. We should either 
> remove the header or supply the content.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-8200) TopologyTestDriver should offer an iterable signature of readOutput

2019-04-08 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-8200:
---
Component/s: streams

> TopologyTestDriver should offer an iterable signature of readOutput
> ---
>
> Key: KAFKA-8200
> URL: https://issues.apache.org/jira/browse/KAFKA-8200
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Michael Drogalis
>Priority: Minor
>
> When using the TopologyTestDriver, one examines the output on a topic with 
> the readOutput method. This method returns one record at a time, until no 
> more records can be found, at which point it returns null.
> Many times, the usage pattern around readOutput will involve writing a loop 
> to extract a number of records from the topic, building up a list of records, 
> until it returns null.
> It would be helpful to offer an iterable signature of readOutput, which 
> returns either an iterator or list over the records that are currently 
> available in the topic. This would effectively remove the loop that a user 
> needs to write for him/herself each time.
> Such a signature might look like:
> {code:java}
> public Iterable<ProducerRecord<K, V>> readOutput(java.lang.String 
> topic);
> {code}
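
For context, a sketch of the drain loop users currently write by hand (assumed 
topic name and deserializers):
{code:java}
final List<ProducerRecord<String, Long>> results = new ArrayList<>();
ProducerRecord<String, Long> record;
while ((record = testDriver.readOutput("output-topic",
        new StringDeserializer(), new LongDeserializer())) != null) {
    results.add(record);
}
{code}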



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-8200) TopologyTestDriver should offer an iterable signature of readOutput

2019-04-08 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-8200:
---
Labels: needs-kip  (was: )

> TopologyTestDriver should offer an iterable signature of readOutput
> ---
>
> Key: KAFKA-8200
> URL: https://issues.apache.org/jira/browse/KAFKA-8200
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Michael Drogalis
>Priority: Minor
>  Labels: needs-kip
>
> When using the TopologyTestDriver, one examines the output on a topic with 
> the readOutput method. This method returns one record at a time, until no 
> more records can be found, at which point it returns null.
> Many times, the usage pattern around readOutput will involve writing a loop 
> to extract a number of records from the topic, building up a list of records, 
> until it returns null.
> It would be helpful to offer an iterable signature of readOutput, which 
> returns either an iterator or list over the records that are currently 
> available in the topic. This would effectively remove the loop that a user 
> needs to write for him/herself each time.
> Such a signature might look like:
> {code:java}
> public Iterable<ProducerRecord<K, V>> readOutput(java.lang.String 
> topic);
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-6399) Consider reducing "max.poll.interval.ms" default for Kafka Streams

2019-04-08 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6399?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-6399:
---
Description: 
In Kafka {{0.10.2.1}} we change the default value of {{max.poll.intervall.ms}} 
for Kafka Streams to {{Integer.MAX_VALUE}}. The reason was that long state 
restore phases during rebalance could yield "rebalance storms" as consumers 
drop out of a consumer group even if they are healthy as they didn't call 
{{poll()}} during state restore phase.

In version {{0.11}} and {{1.0}} the state restore logic was improved a lot and 
thus, now Kafka Streams does call {{poll()}} even during restore phase. 
Therefore, we might consider setting a smaller timeout for 
{{max.poll.intervall.ms}} to detect bad behaving Kafka Streams applications 
(ie, targeting user code) that don't make progress any more during regular 
operations.

The open question would be, what a good default might be. Maybe the actual 
consumer default of 30 seconds might be sufficient. During one {{poll()}} 
roundtrip, we would only call {{restoreConsumer.poll()}} once and restore a 
single batch of records. This should take way less time than 30 seconds.

KIP-442: 
[https://cwiki.apache.org/confluence/display/KAFKA/KIP-442%3A+Return+to+default+max+poll+interval+in+Streams]

  was:
In Kafka {{0.10.2.1}} we change the default value of {{max.poll.intervall.ms}} 
for Kafka Streams to {{Integer.MAX_VALUE}}. The reason was that long state 
restore phases during rebalance could yield "rebalance storms" as consumers 
drop out of a consumer group even if they are healthy as they didn't call 
{{poll()}} during state restore phase.

In version {{0.11}} and {{1.0}} the state restore logic was improved a lot and 
thus, now Kafka Streams does call {{poll()}} even during restore phase. 
Therefore, we might consider setting a smaller timeout for 
{{max.poll.intervall.ms}} to detect bad behaving Kafka Streams applications 
(ie, targeting user code) that don't make progress any more during regular 
operations.

The open question would be, what a good default might be. Maybe the actual 
consumer default of 30 seconds might be sufficient. During one {{poll()}} 
roundtrip, we would only call {{restoreConsumer.poll()}} once and restore a 
single batch of records. This should take way less time than 30 seconds.


> Consider reducing "max.poll.interval.ms" default for Kafka Streams
> --
>
> Key: KAFKA-6399
> URL: https://issues.apache.org/jira/browse/KAFKA-6399
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 1.0.0
>Reporter: Matthias J. Sax
>Assignee: John Roesler
>Priority: Minor
>  Labels: kip
> Fix For: 2.3.0
>
>
> In Kafka {{0.10.2.1}} we change the default value of 
> {{max.poll.intervall.ms}} for Kafka Streams to {{Integer.MAX_VALUE}}. The 
> reason was that long state restore phases during rebalance could yield 
> "rebalance storms" as consumers drop out of a consumer group even if they are 
> healthy as they didn't call {{poll()}} during state restore phase.
> In version {{0.11}} and {{1.0}} the state restore logic was improved a lot 
> and thus, now Kafka Streams does call {{poll()}} even during restore phase. 
> Therefore, we might consider setting a smaller timeout for 
> {{max.poll.intervall.ms}} to detect bad behaving Kafka Streams applications 
> (ie, targeting user code) that don't make progress any more during regular 
> operations.
> The open question would be, what a good default might be. Maybe the actual 
> consumer default of 30 seconds might be sufficient. During one {{poll()}} 
> roundtrip, we would only call {{restoreConsumer.poll()}} once and restore a 
> single batch of records. This should take way less time than 30 seconds.
> KIP-442: 
> [https://cwiki.apache.org/confluence/display/KAFKA/KIP-442%3A+Return+to+default+max+poll+interval+in+Streams]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-6399) Consider reducing "max.poll.interval.ms" default for Kafka Streams

2019-04-08 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6399?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-6399:
---
Labels: kip  (was: )

> Consider reducing "max.poll.interval.ms" default for Kafka Streams
> --
>
> Key: KAFKA-6399
> URL: https://issues.apache.org/jira/browse/KAFKA-6399
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 1.0.0
>Reporter: Matthias J. Sax
>Assignee: John Roesler
>Priority: Minor
>  Labels: kip
> Fix For: 2.3.0
>
>
> In Kafka {{0.10.2.1}} we change the default value of 
> {{max.poll.intervall.ms}} for Kafka Streams to {{Integer.MAX_VALUE}}. The 
> reason was that long state restore phases during rebalance could yield 
> "rebalance storms" as consumers drop out of a consumer group even if they are 
> healthy as they didn't call {{poll()}} during state restore phase.
> In version {{0.11}} and {{1.0}} the state restore logic was improved a lot 
> and thus, now Kafka Streams does call {{poll()}} even during restore phase. 
> Therefore, we might consider setting a smaller timeout for 
> {{max.poll.intervall.ms}} to detect bad behaving Kafka Streams applications 
> (ie, targeting user code) that don't make progress any more during regular 
> operations.
> The open question would be, what a good default might be. Maybe the actual 
> consumer default of 30 seconds might be sufficient. During one {{poll()}} 
> roundtrip, we would only call {{restoreConsumer.poll()}} once and restore a 
> single batch of records. This should take way less time than 30 seconds.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8201) Kafka streams repartitioning topic settings crashing multiple nodes

2019-04-10 Thread Matthias J. Sax (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16814974#comment-16814974
 ] 

Matthias J. Sax commented on KAFKA-8201:


{quote}Another question here : Why use infinite retention and cleanup.policy 
delete instead of log compaction for this case?
{quote}
Compaction does not make sense for repartition topics, because they are just 
used to "shuffle" data. We use infinite retention to guard against data loss, 
in case the application is offline for a longer period of time. Internally, 
Kafka Streams uses the purge-data API to delete processed data, so that the 
repartition topics do not grow unbounded.
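
For reference, a sketch of that purge-data admin call (hypothetical topic, 
partition, offset, and {{adminProps}}; assumes a surrounding method that 
declares {{throws Exception}}):
{code:java}
import java.util.Collections;
import java.util.Map;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.RecordsToDelete;
import org.apache.kafka.common.TopicPartition;

try (final AdminClient admin = AdminClient.create(adminProps)) {
    final Map<TopicPartition, RecordsToDelete> purge = Collections.singletonMap(
        new TopicPartition("my-app-KSTREAM-AGGREGATE-STATE-STORE-0000000002-repartition", 0),
        RecordsToDelete.beforeOffset(12345L));
    // records below offset 12345 become eligible for deletion on the broker
    admin.deleteRecords(purge).all().get();
}
{code}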

> Kafka streams repartitioning topic settings crashing multiple nodes
> ---
>
> Key: KAFKA-8201
> URL: https://issues.apache.org/jira/browse/KAFKA-8201
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.0.0
>Reporter: Anders Aagaard
>Priority: Major
>
> We had an incident in a setup using kafka streams version 2.0.0 and kafka 
> version 2.0.0 protocol version 2.0-IV1. The reason for it is a combination of 
> kafka streams defaults and a bug in kafka.
> Info about the setup: Streams application reading a log compacted input 
> topic, and performing a groupby operation requiring repartitioning.
> Kafka streams automatically creates a repartitioning topic with 24 partitions 
> and the following options:
> segment.bytes=52428800, retention.ms=9223372036854775807, 
> segment.index.bytes=52428800, cleanup.policy=delete, segment.ms=600000.
>  
> This should mean we roll out a new segment when the active one reaches 50 MB 
> or is older than 10 minutes. However, the different timestamps coming into 
> the topic due to log compaction (sometimes varying by multiple days) mean 
> the server will see a message which is older than segment.ms and 
> automatically trigger a new segment roll-out. This causes a segment 
> explosion, where new segments are continuously rolled out.
> There seems to be a bug report for this server side here : 
> https://issues.apache.org/jira/browse/KAFKA-4336.
> This effectively took down several nodes and a broker in our cluster.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8207) StickyPartitionAssignor for KStream

2019-04-10 Thread Matthias J. Sax (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16814976#comment-16814976
 ] 

Matthias J. Sax commented on KAFKA-8207:


That is by design. Kafka Streams' internal "StreamPartitionAssignor" also 
implements a "sticky" policy. I think we can close this ticket as invalid.

What are you trying to achieve, and why do you want to change the partition assignor?

> StickyPartitionAssignor for KStream
> ---
>
> Key: KAFKA-8207
> URL: https://issues.apache.org/jira/browse/KAFKA-8207
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 2.0.0
>Reporter: neeraj
>Priority: Major
>
> In KStreams I am not able to give a sticky partition assignor or my custom 
> partition assignor.
> Overriding the property while building the stream does not work:
> streamsProps.put(ConsumerConfig.PARTITION_ASSIGNMENT_STRATEGY_CONFIG, 
> CustomAssignor.class.getName());
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8214) Handling RecordTooLargeException in the main thread

2019-04-10 Thread Matthias J. Sax (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16815097#comment-16815097
 ] 

Matthias J. Sax commented on KAFKA-8214:


Jira is not the place to ask questions. You should rather use the user mailing 
list.

To handle this exception, you should upgrade to 1.1 and use the 
`ProductionExceptionHandler`. Compare: 
[https://cwiki.apache.org/confluence/display/KAFKA/KIP-210+-+Provide+for+custom+error+handling++when+Kafka+Streams+fails+to+produce]

I think we can close this ticket.
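
For reference, a sketch of such a handler (KIP-210; class name is a 
placeholder):
{code:java}
import java.util.Map;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.errors.RecordTooLargeException;
import org.apache.kafka.streams.errors.ProductionExceptionHandler;

public class SkipTooLargeRecordsHandler implements ProductionExceptionHandler {

    @Override
    public ProductionExceptionHandlerResponse handle(final ProducerRecord<byte[], byte[]> record,
                                                     final Exception exception) {
        // drop over-sized records and keep processing; fail on anything else
        return exception instanceof RecordTooLargeException
            ? ProductionExceptionHandlerResponse.CONTINUE
            : ProductionExceptionHandlerResponse.FAIL;
    }

    @Override
    public void configure(final Map<String, ?> configs) {}
}
{code}
It would be registered via 
{{props.put(StreamsConfig.DEFAULT_PRODUCTION_EXCEPTION_HANDLER_CLASS_CONFIG, 
SkipTooLargeRecordsHandler.class)}}.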

> Handling RecordTooLargeException in the main thread
> ---
>
> Key: KAFKA-8214
> URL: https://issues.apache.org/jira/browse/KAFKA-8214
> Project: Kafka
>  Issue Type: Bug
> Environment: Kafka version 1.0.2
>Reporter: Mohan Parthasarathy
>Priority: Major
>
> How can we handle this exception in the main application? If this task 
> incurs this exception, then it does not commit the offset and hence goes 
> into a loop after that. This happens during the aggregation process. We 
> already have a limit on the message size of the topic, which is 15 MB.
> org.apache.kafka.streams.errors.StreamsException: Exception caught in 
> process. taskId=2_6, processor=KSTREAM-SOURCE-16, 
> topic=r-detection-KSTREAM-AGGREGATE-STATE-STORE-12-repartition, 
> partition=6, offset=2049
>     at 
> org.apache.kafka.streams.processor.internals.StreamTask.process(StreamTask.java:367)
>   
>      
>     at 
> org.apache.kafka.streams.processor.internals.AssignedStreamsTasks.process(AssignedStreamsTasks.java:104)
>   
>  
>     at 
> org.apache.kafka.streams.processor.internals.TaskManager.process(TaskManager.java:413)
>   
>    
>     at 
> org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:862)
>   
>      
>     at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:777)
>   
>      
>     at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:747)
>  
> Caused by: org.apache.kafka.streams.errors.StreamsException: task [2_6] Abort 
> sending since an error caught with a previous record (key 
> fe80::a112:a206:bc15:8e86&fe80::743c:160:c0be:9e66&0 value [B@20dced9e 
> timestamp 1554238297629) to topic 
> -detection-KSTREAM-AGGREGATE-STATE-STORE-12-changelog due to 
> org.apache.kafka.common.errors.RecordTooLargeException: The message is 
> 15728866 bytes when serialized which is larger than the maximum request size 
> you have configured with the max.request.size configuration.  
>     
>     at 
> org.apache.kafka.streams.processor.internals.RecordCollectorImpl.recordSendError(RecordCollectorImpl.java:133)
>      
>     at 
> org.apache.kafka.streams.processor.internals.RecordCollectorImpl.access$500(RecordCollectorImpl.java:50)
>   
>  
>     at 
> org.apache.kafka.streams.processor.internals.RecordCollectorImpl$1.onCompletion(RecordCollectorImpl.java:192)
>   
>     at 
> org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:915)
>   
>    
>     at 
> org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:841)  
>   
>    
>     at 
> org.apache.kafka.streams.processor.internals.RecordCollectorImpl.send(RecordCollectorImpl.java:162)
>   
>   
>     at 
> org.apache.kafka.streams.state.internals.StoreChangeLogger.logChange(StoreChangeLogger.java:59)
>   
>   
>     at 
> org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStore.put(ChangeLoggingKeyValueBytesStore.java:66)
>   
>     at 
> org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesSt

[jira] [Assigned] (KAFKA-8208) Broken link for out-of-order data in KStreams Core Concepts doc

2019-04-10 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax reassigned KAFKA-8208:
--

Assignee: Matthias J. Sax  (was: Bill Bejeck)

> Broken link for out-of-order data in KStreams Core Concepts doc
> ---
>
> Key: KAFKA-8208
> URL: https://issues.apache.org/jira/browse/KAFKA-8208
> Project: Kafka
>  Issue Type: Bug
>  Components: documentation
>Reporter: Michael Drogalis
>Assignee: Matthias J. Sax
>Priority: Minor
>
> In the [core concepts 
> doc|https://kafka.apache.org/21/documentation/streams/core-concepts], there 
> is a link in the "Out-of-Order Handling" section for "out-of-order data". It 
> 404's to https://kafka.apache.org/21/documentation/streams/tbd.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


  1   2   3   4   5   6   7   8   9   10   >