[jira] [Reopened] (KAFKA-3246) Transient Failure in kafka.producer.SyncProducerTest.testProduceCorrectlyReceivesResponse

2019-02-28 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-3246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax reopened KAFKA-3246:


Happened again: 
[https://builds.apache.org/blue/organizations/jenkins/kafka-1.1-jdk7/detail/kafka-1.1-jdk7/248/tests]
{quote}java.lang.AssertionError: expected:<…> but was:<…>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:144)
at kafka.producer.SyncProducerTest.testProduceCorrectlyReceivesResponse(SyncProducerTest.scala:181){quote}

> Transient Failure in 
> kafka.producer.SyncProducerTest.testProduceCorrectlyReceivesResponse
> -
>
> Key: KAFKA-3246
> URL: https://issues.apache.org/jira/browse/KAFKA-3246
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Guozhang Wang
>Priority: Major
>  Labels: transient-unit-test-failure
>
> {code}
> java.lang.AssertionError: expected:<0> but was:<3>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:834)
>   at org.junit.Assert.assertEquals(Assert.java:645)
>   at org.junit.Assert.assertEquals(Assert.java:631)
>   at 
> kafka.producer.SyncProducerTest.testProduceCorrectlyReceivesResponse(SyncProducerTest.scala:182)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:105)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:56)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:64)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:50)
>   at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.messaging.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
>   at 
> org.gradle.messaging.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
>   at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
>   at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:106)
>   at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)

[jira] [Created] (KAFKA-8022) Flaky Test RequestQuotaTest#testExemptRequestTime

2019-02-28 Thread Matthias J. Sax (JIRA)
Matthias J. Sax created KAFKA-8022:
--

 Summary: Flaky Test RequestQuotaTest#testExemptRequestTime
 Key: KAFKA-8022
 URL: https://issues.apache.org/jira/browse/KAFKA-8022
 Project: Kafka
  Issue Type: Bug
  Components: core, unit tests
Affects Versions: 0.11.0.3
Reporter: Matthias J. Sax


[https://builds.apache.org/blue/organizations/jenkins/kafka-trunk-jdk11/detail/kafka-trunk-jdk11/328/tests]
{quote}kafka.zookeeper.ZooKeeperClientTimeoutException: Timed out waiting for 
connection while in state: CONNECTING
at 
kafka.zookeeper.ZooKeeperClient.$anonfun$waitUntilConnected$3(ZooKeeperClient.scala:242)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:253)
at kafka.zookeeper.ZooKeeperClient.waitUntilConnected(ZooKeeperClient.scala:238)
at kafka.zookeeper.ZooKeeperClient.<init>(ZooKeeperClient.scala:96)
at kafka.zk.KafkaZkClient$.apply(KafkaZkClient.scala:1825)
at kafka.zk.ZooKeeperTestHarness.setUp(ZooKeeperTestHarness.scala:59)
at 
kafka.integration.KafkaServerTestHarness.setUp(KafkaServerTestHarness.scala:90)
at kafka.api.IntegrationTestHarness.doSetup(IntegrationTestHarness.scala:81)
at kafka.api.IntegrationTestHarness.setUp(IntegrationTestHarness.scala:73)
at kafka.server.RequestQuotaTest.setUp(RequestQuotaTest.scala:81){quote}
STDOUT:
{quote}[2019-03-01 00:40:47,090] ERROR [KafkaApi-0] Error when handling 
request: clientId=unauthorized-CONTROLLED_SHUTDOWN, correlationId=1, 
api=CONTROLLED_SHUTDOWN, body=\{broker_id=0,broker_epoch=9223372036854775807} 
(kafka.server.KafkaApis:76)
org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
Request(processor=2, connectionId=127.0.0.1:37894-127.0.0.1:54838-2, 
session=Session(User:Unauthorized,/127.0.0.1), 
listenerName=ListenerName(PLAINTEXT), securityProtocol=PLAINTEXT, buffer=null) 
is not authorized.
[2019-03-01 00:40:47,090] ERROR [KafkaApi-0] Error when handling request: 
clientId=unauthorized-STOP_REPLICA, correlationId=1, api=STOP_REPLICA, 
body=\{controller_id=0,controller_epoch=2147483647,broker_epoch=9223372036854775807,delete_partitions=true,partitions=[{topic=topic-1,partition_ids=[0]}]}
 (kafka.server.KafkaApis:76)
org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
Request(processor=0, connectionId=127.0.0.1:37894-127.0.0.1:54822-1, 
session=Session(User:Unauthorized,/127.0.0.1), 
listenerName=ListenerName(PLAINTEXT), securityProtocol=PLAINTEXT, buffer=null) 
is not authorized.
[2019-03-01 00:40:47,091] ERROR [KafkaApi-0] Error when handling request: 
clientId=unauthorized-UPDATE_METADATA, correlationId=1, api=UPDATE_METADATA, 
body=\{controller_id=0,controller_epoch=2147483647,broker_epoch=9223372036854775807,topic_states=[{topic=topic-1,partition_states=[{partition=0,controller_epoch=2147483647,leader=0,leader_epoch=2147483647,isr=[0],zk_version=2,replicas=[0],offline_replicas=[]}]}],live_brokers=[\{id=0,end_points=[{port=0,host=localhost,listener_name=PLAINTEXT,security_protocol_type=0}],rack=null}]}
 (kafka.server.KafkaApis:76)
org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
Request(processor=1, connectionId=127.0.0.1:37894-127.0.0.1:54836-2, 
session=Session(User:Unauthorized,/127.0.0.1), 
listenerName=ListenerName(PLAINTEXT), securityProtocol=PLAINTEXT, buffer=null) 
is not authorized.
[2019-03-01 00:40:47,090] ERROR [KafkaApi-0] Error when handling request: 
clientId=unauthorized-LEADER_AND_ISR, correlationId=1, api=LEADER_AND_ISR, 
body=\{controller_id=0,controller_epoch=2147483647,broker_epoch=9223372036854775807,topic_states=[{topic=topic-1,partition_states=[{partition=0,controller_epoch=2147483647,leader=0,leader_epoch=2147483647,isr=[0],zk_version=2,replicas=[0],is_new=true}]}],live_leaders=[\{id=0,host=localhost,port=0}]}
 (kafka.server.KafkaApis:76)
org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
Request(processor=0, connectionId=127.0.0.1:37894-127.0.0.1:54834-2, 
session=Session(User:Unauthorized,/127.0.0.1), 
listenerName=ListenerName(PLAINTEXT), securityProtocol=PLAINTEXT, buffer=null) 
is not authorized.
[2019-03-01 00:40:47,106] ERROR [KafkaApi-0] Error when handling request: 
clientId=unauthorized-WRITE_TXN_MARKERS, correlationId=1, 
api=WRITE_TXN_MARKERS, body=\{transaction_markers=[]} 
(kafka.server.KafkaApis:76)
org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
Request(processor=0, connectionId=127.0.0.1:37894-127.0.0.1:54876-9, 
session=Session(User:Unauthorized,/127.0.0.1), 
listenerName=ListenerName(PLAINTEXT), securityProtocol=PLAINTEXT, buffer=null) 
is not authorized.
[2019-03-01 00:40:47,123] ERROR [KafkaApi-0] Error when handling request: 
clientId=unauthorized-DESCRIBE_ACLS, correlationId=1, api=DESCRIBE_ACLS, 

[jira] [Resolved] (KAFKA-6904) DynamicBrokerReconfigurationTest#testAdvertisedListenerUpdate is flaky

2019-02-28 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax resolved KAFKA-6904.

Resolution: Duplicate

> DynamicBrokerReconfigurationTest#testAdvertisedListenerUpdate is flaky
> --
>
> Key: KAFKA-6904
> URL: https://issues.apache.org/jira/browse/KAFKA-6904
> Project: Kafka
>  Issue Type: Test
>  Components: core, unit tests
>Reporter: Ted Yu
>Priority: Critical
>  Labels: flaky-test
>
> From 
> https://builds.apache.org/job/kafka-pr-jdk10-scala2.12/820/testReport/junit/kafka.server/DynamicBrokerReconfigurationTest/testAdvertisedListenerUpdate/
>  :
> {code}
> kafka.server.DynamicBrokerReconfigurationTest.testAdvertisedListenerUpdate
> Failing for the past 1 build (Since Failed#820 )
> Took 21 sec.
> Error Message
> java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is 
> not the leader for that topic-partition.
> Stacktrace
> java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is 
> not the leader for that topic-partition.
>   at 
> org.apache.kafka.clients.producer.internals.FutureRecordMetadata.valueOrError(FutureRecordMetadata.java:94)
>   at 
> org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:77)
>   at 
> org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:29)
>   at 
> kafka.server.DynamicBrokerReconfigurationTest.$anonfun$verifyProduceConsume$3(DynamicBrokerReconfigurationTest.scala:996)
>   at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:234)
>   at scala.collection.Iterator.foreach(Iterator.scala:944)
>   at scala.collection.Iterator.foreach$(Iterator.scala:944)
>   at scala.collection.AbstractIterator.foreach(Iterator.scala:1432)
>   at scala.collection.IterableLike.foreach(IterableLike.scala:71)
>   at scala.collection.IterableLike.foreach$(IterableLike.scala:70)
>   at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>   at scala.collection.TraversableLike.map(TraversableLike.scala:234)
>   at scala.collection.TraversableLike.map$(TraversableLike.scala:227)
>   at scala.collection.AbstractTraversable.map(Traversable.scala:104)
>   at 
> kafka.server.DynamicBrokerReconfigurationTest.verifyProduceConsume(DynamicBrokerReconfigurationTest.scala:996)
>   at 
> kafka.server.DynamicBrokerReconfigurationTest.testAdvertisedListenerUpdate(DynamicBrokerReconfigurationTest.scala:742)
> {code}
> The above happened with jdk 10.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-7541) Transient Failure: kafka.server.DynamicBrokerReconfigurationTest.testUncleanLeaderElectionEnable

2019-02-28 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax resolved KAFKA-7541.

   Resolution: Duplicate
Fix Version/s: (was: 2.2.1)
   (was: 2.3.0)

> Transient Failure: 
> kafka.server.DynamicBrokerReconfigurationTest.testUncleanLeaderElectionEnable
> 
>
> Key: KAFKA-7541
> URL: https://issues.apache.org/jira/browse/KAFKA-7541
> Project: Kafka
>  Issue Type: Sub-task
>  Components: core, unit tests
>Affects Versions: 2.2.0
>Reporter: John Roesler
>Priority: Critical
>  Labels: flaky-test
>
> Observed on Java 11: 
> [https://builds.apache.org/job/kafka-pr-jdk11-scala2.12/264/testReport/junit/kafka.server/DynamicBrokerReconfigurationTest/testUncleanLeaderElectionEnable/]
>  
> Stacktrace:
> {noformat}
> java.lang.AssertionError: Unclean leader not elected
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> kafka.server.DynamicBrokerReconfigurationTest.testUncleanLeaderElectionEnable(DynamicBrokerReconfigurationTest.scala:487)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.base/java.lang.reflect.Method.invoke(Method.java:566)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:106)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
>   at 
> org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:66)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.base/java.lang.reflect.Method.invoke(Method.java:566)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
>   at 
> org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
>   at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
>   at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:117)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

[jira] [Reopened] (KAFKA-7288) Transient failure in SslSelectorTest.testCloseConnectionInClosingState

2019-02-28 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax reopened KAFKA-7288:


Seems not to be fixed: 
[https://builds.apache.org/blue/organizations/jenkins/kafka-trunk-jdk8/detail/kafka-trunk-jdk8/3425/tests]
{quote}java.lang.AssertionError: Channel not expired expected null, but was:<…>
at org.junit.Assert.fail(Assert.java:89)
at org.junit.Assert.failNotNull(Assert.java:756)
at org.junit.Assert.assertNull(Assert.java:738)
at org.apache.kafka.common.network.SelectorTest.testCloseConnectionInClosingState(SelectorTest.java:343){quote}

> Transient failure in SslSelectorTest.testCloseConnectionInClosingState
> --
>
> Key: KAFKA-7288
> URL: https://issues.apache.org/jira/browse/KAFKA-7288
> Project: Kafka
>  Issue Type: Bug
>  Components: unit tests
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>Priority: Major
> Fix For: 2.1.0
>
>
> Noticed this failure in SslSelectorTest.testCloseConnectionInClosingState a 
> few times in unit tests in Jenkins:
> {quote}
> java.lang.AssertionError: Channel not expired expected null, but was:<…>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotNull(Assert.java:755)
>   at org.junit.Assert.assertNull(Assert.java:737)
>   at 
> org.apache.kafka.common.network.SelectorTest.testCloseConnectionInClosingState(SelectorTest.java:341)
> {quote}





Jenkins build is back to normal : kafka-2.0-jdk8 #233

2019-02-28 Thread Apache Jenkins Server
See 




Fwd: Apache Kafka Memory Leakage???

2019-02-28 Thread Syed Mudassir Ahmed
Thanks,



-- Forwarded message -
From: Syed Mudassir Ahmed 
Date: Tue, Feb 26, 2019 at 12:40 PM
Subject: Apache Kafka Memory Leakage???
To: 
Cc: Syed Mudassir Ahmed 


Hi Team,
  I have a Java application built on the latest Apache Kafka version, 2.1.1.
  I have a consumer application that runs indefinitely, consuming messages
whenever they are produced.
  Sometimes no messages are produced for hours.  Still, I see that the
memory allocated to the consumer program keeps increasing drastically.
  My code is as follows:

AtomicBoolean isRunning = new AtomicBoolean(true);

Properties kafkaProperties = new Properties();
kafkaProperties.setProperty(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, brokers);
kafkaProperties.put(ConsumerConfig.GROUP_ID_CONFIG, groupID);
kafkaProperties.put(ConsumerConfig.CLIENT_ID_CONFIG, UUID.randomUUID().toString());
kafkaProperties.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
kafkaProperties.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, AUTO_OFFSET_RESET_EARLIEST);
consumer = new KafkaConsumer<>(kafkaProperties, keyDeserializer, valueDeserializer);
if (topics != null) {
    subscribeTopics(topics);
}

boolean infiniteLoop = false;
boolean oneTimeMode = false;
int timeout = consumeTimeout;
if (isSuggest) {
    // Configuration for suggest mode
    oneTimeMode = true;
    msgCount = 0;
    timeout = DEFAULT_CONSUME_TIMEOUT_IN_MS;
} else if (msgCount < 0) {
    infiniteLoop = true;
} else if (msgCount == 0) {
    oneTimeMode = true;
}
Map<TopicPartition, OffsetAndMetadata> offsets = Maps.newHashMap();
do {
    ConsumerRecords records = consumer.poll(timeout);
    for (final ConsumerRecord record : records) {
        if (!infiniteLoop && !oneTimeMode) {
            --msgCount;
            if (msgCount < 0) {
                break;
            }
        }
        outputViews.write(new BinaryOutput() {
            @Override
            public Document getHeader() {
                return generateHeader(record, oldHeader);
            }

            @Override
            public void write(WritableByteChannel writeChannel) throws IOException {
                try (OutputStream os = Channels.newOutputStream(writeChannel)) {
                    os.write(record.value());
                }
            }
        });
        // The offset to commit should be the next offset of the current one,
        // according to the API
        offsets.put(new TopicPartition(record.topic(), record.partition()),
                new OffsetAndMetadata(record.offset() + 1));
        // In suggest mode, we should not change the current offset
        if (isSyncCommit && isSuggest) {
            commitOffset(offsets);
            offsets.clear();
        }
    }
} while ((msgCount > 0 || infiniteLoop) && isRunning.get());
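
The offset arithmetic in the loop above follows the KafkaConsumer commit contract: the offset you commit for a partition is the offset of the *next* record to consume, i.e. record.offset() + 1. A tiny standalone sketch of that rule (plain Java, hypothetical topic/partition/offset values, no broker required):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical illustration of the commit rule used in the loop above:
// for each partition, the offset to commit is lastConsumedOffset + 1,
// so a restarted consumer resumes at the first unread record.
public class CommitOffsetRule {
    public static void main(String[] args) {
        Map<String, Long> toCommit = new HashMap<>();
        long[][] consumed = {{0, 41}, {0, 42}, {1, 7}}; // {partition, offset}
        for (long[] rec : consumed) {
            // Last write wins, so each partition ends at its last offset + 1.
            toCommit.put("topic-" + rec[0], rec[1] + 1);
        }
        System.out.println(toCommit.get("topic-0")); // 43
        System.out.println(toCommit.get("topic-1")); // 8
    }
}
```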


See the screenshot below.  In about nineteen hours it consumed just 5
messages, but the allocated memory is 1.6 GB.

[image: image.png]

Any clues on how to get rid of the memory issue?  Is there anything I need
to do in the program, or is it a bug in the Kafka library?

Please revert ASAP.


Thanks,


Build failed in Jenkins: kafka-2.1-jdk8 #138

2019-02-28 Thread Apache Jenkins Server
See 


Changes:

[bill] KAFKA-8011: Fix for race condition causing concurrent modification

--
[...truncated 1.33 MB...]

kafka.zookeeper.ZooKeeperClientTest > testSetAclNonExistentZNode FAILED
java.lang.NoClassDefFoundError: scala/Product$class
at kafka.zookeeper.SetAclRequest.<init>(ZooKeeperClient.scala:522)
at 
kafka.zookeeper.ZooKeeperClientTest.testSetAclNonExistentZNode(ZooKeeperClientTest.scala:180)

Caused by:
java.lang.ClassNotFoundException: scala.Product$class
at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 2 more

kafka.zookeeper.ZooKeeperClientTest > testConnectionLossRequestTermination 
STARTED

kafka.zookeeper.ZooKeeperClientTest > testConnectionLossRequestTermination 
FAILED
java.lang.NoClassDefFoundError: scala/Product$class
at kafka.zookeeper.GetDataRequest.<init>(ZooKeeperClient.scala:510)
at 
kafka.zookeeper.ZooKeeperClientTest$$anonfun$5.apply(ZooKeeperClientTest.scala:445)
at 
kafka.zookeeper.ZooKeeperClientTest$$anonfun$5.apply(ZooKeeperClientTest.scala:445)
at 
scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:233)
at scala.collection.immutable.Range.foreach(Range.scala:155)
at scala.collection.TraversableLike.map(TraversableLike.scala:233)
at scala.collection.TraversableLike.map$(TraversableLike.scala:226)
at scala.collection.AbstractTraversable.map(Traversable.scala:104)
at 
kafka.zookeeper.ZooKeeperClientTest.testConnectionLossRequestTermination(ZooKeeperClientTest.scala:445)

Caused by:
java.lang.ClassNotFoundException: scala.Product$class
at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 9 more

kafka.zookeeper.ZooKeeperClientTest > testExistsNonExistentZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testExistsNonExistentZNode FAILED
java.lang.NoClassDefFoundError: scala/Product$class
at kafka.zookeeper.ExistsRequest.<init>(ZooKeeperClient.scala:506)
at 
kafka.zookeeper.ZooKeeperClientTest.testExistsNonExistentZNode(ZooKeeperClientTest.scala:109)

kafka.zookeeper.ZooKeeperClientTest > testGetDataNonExistentZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testGetDataNonExistentZNode FAILED
java.lang.NoClassDefFoundError: scala/Product$class
at kafka.zookeeper.GetDataRequest.<init>(ZooKeeperClient.scala:510)
at 
kafka.zookeeper.ZooKeeperClientTest.testGetDataNonExistentZNode(ZooKeeperClientTest.scala:125)

kafka.zookeeper.ZooKeeperClientTest > testConnectionTimeout STARTED

kafka.zookeeper.ZooKeeperClientTest > testConnectionTimeout PASSED

kafka.zookeeper.ZooKeeperClientTest > 
testBlockOnRequestCompletionFromStateChangeHandler STARTED

kafka.zookeeper.ZooKeeperClientTest > 
testBlockOnRequestCompletionFromStateChangeHandler PASSED

kafka.zookeeper.ZooKeeperClientTest > testUnresolvableConnectString STARTED

kafka.zookeeper.ZooKeeperClientTest > testUnresolvableConnectString PASSED

kafka.zookeeper.ZooKeeperClientTest > testGetChildrenNonExistentZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testGetChildrenNonExistentZNode FAILED
java.lang.NoClassDefFoundError: scala/Product$class
at kafka.zookeeper.GetChildrenRequest.<init>(ZooKeeperClient.scala:526)
at 
kafka.zookeeper.ZooKeeperClientTest.testGetChildrenNonExistentZNode(ZooKeeperClientTest.scala:186)

Caused by:
java.lang.ClassNotFoundException: scala.Product$class
at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 2 more

kafka.zookeeper.ZooKeeperClientTest > testPipelinedGetData STARTED

kafka.zookeeper.ZooKeeperClientTest > testPipelinedGetData FAILED
java.lang.NoClassDefFoundError: scala/Product$class
at kafka.zookeeper.CreateRequest.<init>(ZooKeeperClient.scala:498)
at 
kafka.zookeeper.ZooKeeperClientTest$$anonfun$2.apply(ZooKeeperClientTest.scala:226)
at 
kafka.zookeeper.ZooKeeperClientTest$$anonfun$2.apply(ZooKeeperClientTest.scala:226)
at 
scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:233)
at scala.collection.immutable.Range.foreach(Range.scala:155)
at 

Jenkins build is back to normal : kafka-trunk-jdk8 #3427

2019-02-28 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-8021) KafkaProducer.flush() can show unexpected behavior when a batch is split

2019-02-28 Thread Abhishek Mendhekar (JIRA)
Abhishek Mendhekar created KAFKA-8021:
-

 Summary: KafkaProducer.flush() can show unexpected behavior when a 
batch is split
 Key: KAFKA-8021
 URL: https://issues.apache.org/jira/browse/KAFKA-8021
 Project: Kafka
  Issue Type: Bug
  Components: clients, producer 
Affects Versions: 0.11.0.0
Reporter: Abhishek Mendhekar
Assignee: Abhishek Mendhekar


KafkaProducer.flush() marks the flush in progress and then waits for all 
incomplete batches to be completed (waits on the producer batch futures to 
finish).

The behavior is seen when a batch is split due to a MESSAGE_TOO_LARGE exception.

The large batch is split into smaller batches (2 or more), but 
ProducerBatch.split() marks the large batch's future as complete before adding 
the new batches to the incomplete list. If KafkaProducer.flush() is called at 
this point, it makes a copy of the existing incomplete list of batches and 
waits for those to complete, missing the new batches produced by splitting 
the large one.
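
A minimal, single-threaded sketch of the ordering problem described above (all names here are hypothetical; the real code paths are ProducerBatch.split() and KafkaProducer.flush()):

```java
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

// Hypothetical simulation of the split/flush race: split() completes the
// parent batch *before* registering its children, so a flush() that
// snapshots the incomplete set in between waits on nothing and returns
// while the child batches are still in flight.
public class FlushSplitRace {
    static Set<String> incomplete = new LinkedHashSet<>();

    // flush() snapshots the incomplete set and waits only on that snapshot.
    static List<String> flushSnapshot() {
        return new ArrayList<>(incomplete);
    }

    public static void main(String[] args) {
        incomplete.add("bigBatch");

        // Step 1 of split(): parent batch's future completed and removed...
        incomplete.remove("bigBatch");

        // ...a flush() arriving exactly now snapshots an empty set:
        List<String> toAwait = flushSnapshot();

        // Step 2 of split(): child batches registered only afterwards.
        incomplete.add("split-1");
        incomplete.add("split-2");

        // flush() waits on its empty snapshot and returns, even though
        // split-1 and split-2 have not been sent yet.
        System.out.println("flush waits on: " + toAwait);
        System.out.println("still in flight: " + incomplete);
    }
}
```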

 





Build failed in Jenkins: kafka-trunk-jdk11 #329

2019-02-28 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-8012; Ensure partitionStates have not been removed before

[wangguoz] KAFKA-7918: Inline generic parameters Pt. III: in-memory window store

--
[...truncated 2.32 MB...]
org.apache.kafka.connect.transforms.FlattenTest > testUnsupportedTypeInMap 
PASSED

org.apache.kafka.connect.transforms.FlattenTest > testNestedStruct STARTED

org.apache.kafka.connect.transforms.FlattenTest > testNestedStruct PASSED

org.apache.kafka.connect.transforms.RegexRouterTest > doesntMatch STARTED

org.apache.kafka.connect.transforms.RegexRouterTest > doesntMatch PASSED

org.apache.kafka.connect.transforms.RegexRouterTest > identity STARTED

org.apache.kafka.connect.transforms.RegexRouterTest > identity PASSED

org.apache.kafka.connect.transforms.RegexRouterTest > addPrefix STARTED

org.apache.kafka.connect.transforms.RegexRouterTest > addPrefix PASSED

org.apache.kafka.connect.transforms.RegexRouterTest > addSuffix STARTED

org.apache.kafka.connect.transforms.RegexRouterTest > addSuffix PASSED

org.apache.kafka.connect.transforms.RegexRouterTest > slice STARTED

org.apache.kafka.connect.transforms.RegexRouterTest > slice PASSED

org.apache.kafka.connect.transforms.RegexRouterTest > staticReplacement STARTED

org.apache.kafka.connect.transforms.RegexRouterTest > staticReplacement PASSED

org.apache.kafka.connect.transforms.TimestampRouterTest > defaultConfiguration 
STARTED

org.apache.kafka.connect.transforms.TimestampRouterTest > defaultConfiguration 
PASSED

org.apache.kafka.connect.transforms.ValueToKeyTest > withSchema STARTED

org.apache.kafka.connect.transforms.ValueToKeyTest > withSchema PASSED

org.apache.kafka.connect.transforms.ValueToKeyTest > schemaless STARTED

org.apache.kafka.connect.transforms.ValueToKeyTest > schemaless PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeDateRecordValueWithSchemaString STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeDateRecordValueWithSchemaString PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessBooleanTrue STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessBooleanTrue PASSED

org.apache.kafka.connect.transforms.CastTest > testConfigInvalidTargetType 
STARTED

org.apache.kafka.connect.transforms.CastTest > testConfigInvalidTargetType 
PASSED

org.apache.kafka.connect.transforms.CastTest > 
testConfigMixWholeAndFieldTransformation STARTED

org.apache.kafka.connect.transforms.CastTest > 
testConfigMixWholeAndFieldTransformation PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessFloat32 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessFloat32 PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessFloat64 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessFloat64 PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessUnsupportedType STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessUnsupportedType PASSED

org.apache.kafka.connect.transforms.CastTest > castWholeRecordKeySchemaless 
STARTED

org.apache.kafka.connect.transforms.CastTest > castWholeRecordKeySchemaless 
PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaFloat32 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaFloat32 PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaFloat64 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaFloat64 PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaString STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaString PASSED

org.apache.kafka.connect.transforms.CastTest > testConfigEmpty STARTED

org.apache.kafka.connect.transforms.CastTest > testConfigEmpty PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessInt16 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessInt16 PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessInt32 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessInt32 PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessInt64 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessInt64 PASSED

org.apache.kafka.connect.transforms.CastTest > castFieldsSchemaless STARTED

org.apache.kafka.connect.transforms.CastTest > castFieldsSchemaless PASSED

org.apache.kafka.connect.transforms.CastTest > testUnsupportedTargetType STARTED

org.apache.kafka.connect.transforms.CastTest > testUnsupportedTargetType PASSED


Build failed in Jenkins: kafka-2.2-jdk8 #39

2019-02-28 Thread Apache Jenkins Server
See 


Changes:

[matthias] MINOR: fix release.py script (#6317)

[jason] HOTFIX: Change header back to http instead of https to path license

[bill] KAFKA-8011: Fix for race condition causing concurrent modification

--
[...truncated 2.69 MB...]
kafka.controller.PartitionStateMachineTest > 
testNonexistentPartitionToNewPartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testNewPartitionToOnlinePartitionTransitionErrorCodeFromCreateStates STARTED

kafka.controller.PartitionStateMachineTest > 
testNewPartitionToOnlinePartitionTransitionErrorCodeFromCreateStates PASSED

kafka.controller.PartitionStateMachineTest > 
testOfflinePartitionToNonexistentPartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testOfflinePartitionToNonexistentPartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testOnlinePartitionToOfflineTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testOnlinePartitionToOfflineTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testNewPartitionToOfflinePartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testNewPartitionToOfflinePartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > testUpdatingOfflinePartitionsCount 
STARTED

kafka.controller.PartitionStateMachineTest > testUpdatingOfflinePartitionsCount 
PASSED

kafka.controller.PartitionStateMachineTest > 
testInvalidNonexistentPartitionToOnlinePartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testInvalidNonexistentPartitionToOnlinePartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testInvalidNonexistentPartitionToOfflinePartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testInvalidNonexistentPartitionToOfflinePartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testOnlinePartitionToOnlineTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testOnlinePartitionToOnlineTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testNewPartitionToOnlinePartitionTransitionZkUtilsExceptionFromCreateStates 
STARTED

kafka.controller.PartitionStateMachineTest > 
testNewPartitionToOnlinePartitionTransitionZkUtilsExceptionFromCreateStates 
PASSED

kafka.controller.PartitionStateMachineTest > 
testInvalidNewPartitionToNonexistentPartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testInvalidNewPartitionToNonexistentPartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testNewPartitionToOnlinePartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testNewPartitionToOnlinePartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testInvalidOnlinePartitionToNewPartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testInvalidOnlinePartitionToNewPartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testUpdatingOfflinePartitionsCountDuringTopicDeletion STARTED

kafka.controller.PartitionStateMachineTest > 
testUpdatingOfflinePartitionsCountDuringTopicDeletion PASSED

kafka.controller.PartitionStateMachineTest > 
testOfflinePartitionToOnlinePartitionTransitionErrorCodeFromStateLookup STARTED

kafka.controller.PartitionStateMachineTest > 
testOfflinePartitionToOnlinePartitionTransitionErrorCodeFromStateLookup PASSED

kafka.controller.PartitionStateMachineTest > 
testOnlinePartitionToOnlineTransitionForControlledShutdown STARTED

kafka.controller.PartitionStateMachineTest > 
testOnlinePartitionToOnlineTransitionForControlledShutdown PASSED

kafka.controller.PartitionStateMachineTest > 
testOfflinePartitionToOnlinePartitionTransitionZkUtilsExceptionFromStateLookup 
STARTED

kafka.controller.PartitionStateMachineTest > 
testOfflinePartitionToOnlinePartitionTransitionZkUtilsExceptionFromStateLookup 
PASSED

kafka.controller.PartitionStateMachineTest > 
testNoOfflinePartitionsChangeForTopicsBeingDeleted STARTED

kafka.controller.PartitionStateMachineTest > 
testNoOfflinePartitionsChangeForTopicsBeingDeleted PASSED

kafka.controller.PartitionStateMachineTest > 
testInvalidOnlinePartitionToNonexistentPartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testInvalidOnlinePartitionToNonexistentPartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testInvalidOfflinePartitionToNewPartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testInvalidOfflinePartitionToNewPartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testOfflinePartitionToOnlinePartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testOfflinePartitionToOnlinePartitionTransition PASSED

kafka.controller.ControllerFailoverTest > testHandleIllegalStateException 
STARTED

kafka.controller.ControllerFailoverTest > 

Build failed in Jenkins: kafka-1.1-jdk7 #249

2019-02-28 Thread Apache Jenkins Server
See 


Changes:

[bill] Fixing merge conflicts from cherry pick

--
[...truncated 1.94 MB...]

org.apache.kafka.streams.StreamsConfigTest > 
shouldNotOverrideUserConfigRetriesIfExactlyOnceEnabled STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldNotOverrideUserConfigRetriesIfExactlyOnceEnabled PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedConsumerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedConsumerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldOverrideStreamsDefaultConsumerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldOverrideStreamsDefaultConsumerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > testGetProducerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > testGetProducerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldAllowSettingProducerMaxInFlightRequestPerConnectionsWhenEosDisabled 
STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldAllowSettingProducerMaxInFlightRequestPerConnectionsWhenEosDisabled PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldThrowStreamsExceptionIfValueSerdeConfigFails STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldThrowStreamsExceptionIfValueSerdeConfigFails PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldResetToDefaultIfConsumerAutoCommitIsOverridden STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldResetToDefaultIfConsumerAutoCommitIsOverridden PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldThrowExceptionIfNotAtLestOnceOrExactlyOnce STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldThrowExceptionIfNotAtLestOnceOrExactlyOnce PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldAllowSettingConsumerIsolationLevelIfEosDisabled STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldAllowSettingConsumerIsolationLevelIfEosDisabled PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedProducerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedProducerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedPropertiesThatAreNotPartOfRestoreConsumerConfig STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedPropertiesThatAreNotPartOfRestoreConsumerConfig PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldOverrideStreamsDefaultProducerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldOverrideStreamsDefaultProducerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedPropertiesThatAreNotPartOfProducerConfig STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedPropertiesThatAreNotPartOfProducerConfig PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldBeBackwardsCompatibleWithDeprecatedConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldBeBackwardsCompatibleWithDeprecatedConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldBeSupportNonPrefixedConsumerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldBeSupportNonPrefixedConsumerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldOverrideStreamsDefaultConsumerConifgsOnRestoreConsumer STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldOverrideStreamsDefaultConsumerConifgsOnRestoreConsumer PASSED

org.apache.kafka.streams.StreamsConfigTest > shouldAcceptExactlyOnce STARTED

org.apache.kafka.streams.StreamsConfigTest > shouldAcceptExactlyOnce PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSetInternalLeaveGroupOnCloseConfigToFalseInConsumer STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSetInternalLeaveGroupOnCloseConfigToFalseInConsumer PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedPropertiesThatAreNotPartOfConsumerConfig STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedPropertiesThatAreNotPartOfConsumerConfig PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldForwardCustomConfigsWithNoPrefixToAllClients STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldForwardCustomConfigsWithNoPrefixToAllClients PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldResetToDefaultIfRestoreConsumerAutoCommitIsOverridden STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldResetToDefaultIfRestoreConsumerAutoCommitIsOverridden PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSpecifyCorrectValueSerdeClassOnError STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSpecifyCorrectValueSerdeClassOnError PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSpecifyCorrectValueSerdeClassOnErrorUsingDeprecatedConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > 

Build failed in Jenkins: kafka-1.0-jdk7 #263

2019-02-28 Thread Apache Jenkins Server
See 


Changes:

[bill] KAFKA-8011: Fix for race condition causing concurrent modification

--
[...truncated 1.86 MB...]

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
shouldNotAllowNullStateStoreNameWhenConnectingProcessorAndStateStores STARTED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
shouldNotAllowNullStateStoreNameWhenConnectingProcessorAndStateStores PASSED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
shouldAddTimestampExtractorPerSource STARTED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
shouldAddTimestampExtractorPerSource PASSED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
testPatternSourceTopic STARTED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
testPatternSourceTopic PASSED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
shouldAssociateStateStoreNameWhenStateStoreSupplierIsExternal STARTED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
shouldAssociateStateStoreNameWhenStateStoreSupplierIsExternal PASSED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
testAddSinkWithWrongParent STARTED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
testAddSinkWithWrongParent PASSED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
testPatternMatchesAlreadyProvidedTopicSource STARTED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
testPatternMatchesAlreadyProvidedTopicSource PASSED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
testAddSinkConnectedWithMultipleParent STARTED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
testAddSinkConnectedWithMultipleParent PASSED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
testSubscribeTopicNameAndPattern STARTED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
testSubscribeTopicNameAndPattern PASSED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
shouldAddPatternSourceWithoutOffsetReset STARTED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
shouldAddPatternSourceWithoutOffsetReset PASSED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
shouldNotAddNullInternalTopic STARTED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
shouldNotAddNullInternalTopic PASSED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
testAddProcessorWithWrongParent STARTED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
testAddProcessorWithWrongParent PASSED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
shouldNotAllowNullNameWhenAddingProcessor STARTED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
shouldNotAllowNullNameWhenAddingProcessor PASSED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
shouldAddSourcePatternWithOffsetReset STARTED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
shouldAddSourcePatternWithOffsetReset PASSED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
testAddStateStore STARTED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
testAddStateStore PASSED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
testAddSinkConnectedWithParent STARTED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
testAddSinkConnectedWithParent PASSED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
testAddStateStoreWithNonExistingProcessor STARTED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
testAddStateStoreWithNonExistingProcessor PASSED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
shouldAddInternalTopicConfigWithCleanupPolicyDeleteForInternalTopics STARTED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
shouldAddInternalTopicConfigWithCleanupPolicyDeleteForInternalTopics PASSED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
shouldCorrectlyMapStateStoreToInternalTopics STARTED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
shouldCorrectlyMapStateStoreToInternalTopics PASSED

org.apache.kafka.streams.processor.internals.PartitionGroupTest > 
testTimeTracking STARTED

org.apache.kafka.streams.processor.internals.PartitionGroupTest > 
testTimeTracking PASSED

org.apache.kafka.streams.processor.internals.StateConsumerTest > 

Build failed in Jenkins: kafka-trunk-jdk11 #328

2019-02-28 Thread Apache Jenkins Server
See 


Changes:

[jason] HOTFIX: Change header back to http instead of https to path license

[jason] MINOR: Skip quota check when replica is in sync (#6344)

[wangguoz] KAFKA-7912: Support concurrent access in InMemoryKeyValueStore 
(#6336)

[github] KAFKA-8011: Fix for race condition causing concurrent modification

--
[...truncated 2.32 MB...]
org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 

[jira] [Created] (KAFKA-8020) Consider making ThreadCache a time-aware LRU Cache

2019-02-28 Thread Richard Yu (JIRA)
Richard Yu created KAFKA-8020:
-

 Summary: Consider making ThreadCache a time-aware LRU Cache
 Key: KAFKA-8020
 URL: https://issues.apache.org/jira/browse/KAFKA-8020
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Reporter: Richard Yu


Currently, in Kafka Streams, ThreadCache is used to store 
{{InternalProcessorContext}}s. Typically, these entries are relevant for only a 
limited time span. For example, in {{CachingWindowStore}}, a window is of fixed 
size; after it expires, it will no longer be queried, but it can potentially 
stay in the ThreadCache for an unnecessarily long time if it is not evicted 
(e.g. when few new entries are being inserted). For better use of memory, we 
should implement a time-aware LRU cache which takes the lifespan of an entry 
into account and removes it once it has expired. 
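
A time-aware LRU cache along these lines could be sketched as below. This is a
minimal illustration only, not Kafka's actual ThreadCache API: the class name,
the explicit `nowMs` parameter, and the TTL policy are all assumptions made for
the example.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of a time-aware LRU cache: entries are evicted when capacity is
// exceeded (classic LRU via LinkedHashMap access order) OR when their
// time-to-live has elapsed. Hypothetical names; not the real ThreadCache.
public class TimeAwareLruCache<K, V> {

    private static final class Entry<V> {
        final V value;
        final long expiresAtMs;
        Entry(V value, long expiresAtMs) {
            this.value = value;
            this.expiresAtMs = expiresAtMs;
        }
    }

    private final long ttlMs;
    private final LinkedHashMap<K, Entry<V>> map;

    public TimeAwareLruCache(final int maxEntries, final long ttlMs) {
        this.ttlMs = ttlMs;
        // accessOrder=true iterates least-recently-used entries first
        this.map = new LinkedHashMap<K, Entry<V>>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, Entry<V>> eldest) {
                return size() > maxEntries;  // capacity-based eviction
            }
        };
    }

    public synchronized void put(K key, V value, long nowMs) {
        map.put(key, new Entry<>(value, nowMs + ttlMs));
    }

    public synchronized V get(K key, long nowMs) {
        Entry<V> e = map.get(key);
        if (e == null) return null;
        if (e.expiresAtMs <= nowMs) {  // time-based eviction: expired entry
            map.remove(key);
            return null;
        }
        return e.value;
    }

    public synchronized int size() { return map.size(); }

    public static void main(String[] args) {
        TimeAwareLruCache<String, String> cache = new TimeAwareLruCache<>(2, 100);
        cache.put("a", "1", 0);
        cache.put("b", "2", 0);
        cache.get("a", 10);       // touch "a" so "b" becomes the LRU entry
        cache.put("c", "3", 10);  // capacity eviction drops "b"
        assert cache.get("b", 10) == null;
        assert "1".equals(cache.get("a", 50));
        assert cache.get("a", 200) == null;  // TTL eviction: "a" expired
    }
}
```

The point of the sketch is that expiry is checked lazily on access; a real
implementation would likely also need a background sweep so expired entries
free memory even when they are never read again.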



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Build failed in Jenkins: kafka-trunk-jdk8 #3426

2019-02-28 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: fix release.py script (#6317)

[jason] HOTFIX: Change header back to http instead of https to path license

[jason] MINOR: Skip quota check when replica is in sync (#6344)

[wangguoz] KAFKA-7912: Support concurrent access in InMemoryKeyValueStore 
(#6336)

[github] KAFKA-8011: Fix for race condition causing concurrent modification

--
[...truncated 2.32 MB...]
org.apache.kafka.connect.transforms.ValueToKeyTest > withSchema PASSED

org.apache.kafka.connect.transforms.ValueToKeyTest > schemaless STARTED

org.apache.kafka.connect.transforms.ValueToKeyTest > schemaless PASSED

org.apache.kafka.connect.transforms.TimestampRouterTest > defaultConfiguration 
STARTED

org.apache.kafka.connect.transforms.TimestampRouterTest > defaultConfiguration 
PASSED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > schemaNameUpdate 
STARTED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > schemaNameUpdate 
PASSED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > 
schemaNameAndVersionUpdateWithStruct STARTED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > 
schemaNameAndVersionUpdateWithStruct PASSED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > 
updateSchemaOfStruct STARTED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > 
updateSchemaOfStruct PASSED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > 
schemaNameAndVersionUpdate STARTED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > 
schemaNameAndVersionUpdate PASSED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > updateSchemaOfNull 
STARTED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > updateSchemaOfNull 
PASSED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > schemaVersionUpdate 
STARTED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > schemaVersionUpdate 
PASSED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > 
updateSchemaOfNonStruct STARTED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > 
updateSchemaOfNonStruct PASSED

org.apache.kafka.connect.transforms.ExtractFieldTest > withSchema STARTED

org.apache.kafka.connect.transforms.ExtractFieldTest > withSchema PASSED

org.apache.kafka.connect.transforms.ExtractFieldTest > testNullWithSchema 
STARTED

org.apache.kafka.connect.transforms.ExtractFieldTest > testNullWithSchema PASSED

org.apache.kafka.connect.transforms.ExtractFieldTest > schemaless STARTED

org.apache.kafka.connect.transforms.ExtractFieldTest > schemaless PASSED

org.apache.kafka.connect.transforms.ExtractFieldTest > testNullSchemaless 
STARTED

org.apache.kafka.connect.transforms.ExtractFieldTest > testNullSchemaless PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessFieldConversion STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessFieldConversion PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigInvalidTargetType STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigInvalidTargetType PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessStringToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessStringToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > testKey STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > testKey PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessDateToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessDateToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToString STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToString PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimeToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimeToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimestampToDate STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimestampToDate PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimestampToTime STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimestampToTime PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimestampToUnix STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimestampToUnix PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaUnixToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 

Re: KIP-213 - Scalable/Usable Foreign-Key KTable joins - Rebooted.

2019-02-28 Thread Matthias J. Sax
Adam,

I finally had the time to review the KIP and catch up on the mailing
list discussion. Thanks a lot for putting this together! Great work! Of
course also big thanks to Jan who started the KIP initially.

This is a long email, because I re-read multiple months of discussion and
reply to many things... I don't think there is a need to reply to every
point I mention. I just want to add my 2 cents to a couple of points that
were discussed.


(0) Overall the design makes sense to me. The API is intuitive and clean
now. The API in the original proposal leaked a lot of implementation
details, which was a major concern to me. I also believe that it's
important to partition the data of the result KTable correctly (the
KScatteredTable does violate this; ie, the "key is useless" as Jan
phrased it), thus the last step seems to be mandatory to me. Also, adding
a KScatteredTable adds a lot of new public API that basically just
duplicates existing APIs (filter, mapValues, groupBy, toStream, join).
Lastly, I am happy that we don't need the "watermark/header" stuff to fix
the ordering race condition.

(1) About the optimization potential for multiple consecutive joins: I
think we could tackle this with the optimization framework we now have in
place.

(2) I was also thinking about left/outer join, and I believe that we
could add a left-join easily (as follow up work; even if I think it's
not a big addition to the current design). However, an outer-join does
not make too much sense because we don't have a key for the result
KTable of "right hand side" records that don't join (ie, the
right-outer-join part cannot be done).

(3) About the "emit on change" vs "emit on update" discussion: I think
this is orthogonal to this KIP, and I would stick with "emit on update"
because this is the current behavior of all existing operators. If we
want to change it, we should consider doing this for all operators. I
also believe that, even if it does not change the API, it should be backed
by a KIP, because it is a semantic (ie, major) change.



@Jan:

> I have a lengthy track record of losing those kinda arguments within the 
> streams community and I have no clue why

Because you are a power user who has different goals in mind. We tend
to optimize the API so that it's easy to use for non-power users, who are
the majority of people. The KScatteredTable is a hard-to-grok concept...

> where simplicity isn't really that as users still need to understand it I 
> argue

I disagree here. If we do a good job designing the APIs, users don't need
to understand the nitty-gritty details, and it "just works".


For the performance discussion, ie, which side is "larger": this does
not really matter (the number of keys is irrelevant) IMHO. The question is
which side is _updated_ more often and what "n" is (the factor "n" would
not be relevant for Jan's proposal though). For every left hand side
update, we send 2 messages to the right hand side and get 2 messages
back. For every right hand side update we send n messages to the left
hand side.

I agree with Jan we can't know this though (not the irrelevant "size" of
each side, nor the "n", nor the update rate).





Finally, couple of questions/comments on the KIP (please reply to this
part :)):

 - For the materialized combined-key store, why do we need to disable
caching? And why do we need to flush the store?

 - About resolving order:

(a) for each LHS update, we need to send two records to the RHS (one to
"unsubscribe" the old FK, and one to "subscribe" to the new FK). The KIP
further proposes to send two records back: `null` for the unsubscribe
and a new join "result" for the new FK. This can result in ordering
issues that we want to resolve with the FK lookup in the final step.

> The thing is that in that last join, we have the opportunity to compare the
> current FK in the left table with the incoming PK of the right table. If
> they don't match, we just drop the event, since it must be outdated.
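
The "compare and drop" step quoted above can be sketched roughly as follows.
All names here (FkJoinResolver, the String keys, etc.) are illustrative
assumptions, not the KIP's actual classes or the Streams internals:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of the final resolution step: before emitting a
// RHS-derived join result, compare the FK it was computed for against the
// FK currently stored for the LHS key; if they no longer match, the event
// is outdated and is dropped.
public class FkJoinResolver {
    // current foreign key per left-hand-side primary key
    private final Map<String, String> currentFkByLeftKey = new HashMap<>();

    public void updateLeft(String leftKey, String newFk) {
        currentFkByLeftKey.put(leftKey, newFk);
    }

    /** Returns the join result to emit, or null if the event is outdated. */
    public String resolve(String leftKey, String eventFk, String joinedValue) {
        String currentFk = currentFkByLeftKey.get(leftKey);
        if (currentFk == null || !currentFk.equals(eventFk)) {
            return null;  // FK changed since this event was produced: drop
        }
        return joinedValue;
    }

    public static void main(String[] args) {
        FkJoinResolver r = new FkJoinResolver();
        r.updateLeft("order-1", "customer-B");  // FK already moved A -> B
        // a late event computed for the old FK arrives: dropped
        assert r.resolve("order-1", "customer-A", "joined-A") == null;
        // the event matching the current FK passes through
        assert "joined-B".equals(r.resolve("order-1", "customer-B", "joined-B"));
    }
}
```

This only illustrates the ordering argument in the quote; the real design has
to do this comparison inside the stateful join processor, not a standalone map.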

Jan criticized this as "swallowing" updates if they arrive out-of-order,
so that the delete is not reflected in the result KTable (if I understood
him correctly). I disagree with Jan, and actually think we should
always avoid the delete on the result KTable to begin with:

If both records arrive in the correct order on the LHS, we would still
produce two output messages downstream. This is intuitive, because we
_know_ that a single update to the LHS table, should result in _one_
update to the result KTable. And we also know, that the update is based
on (ie, correct for) the new FK.

Thus, I am wondering why we would need to send the `null` message back
(from RHS to LHS) in the first place?

Instead, we could encode whether the RHS should send something back or not.
This way, an "unsubscribe" message will only update the store for the
CombinedKey (ie, delete the corresponding entry), and only the new FK will
trigger a join lookup in the RHS table to compute a "result" that is
sent back. If the LHS input is a 

Build failed in Jenkins: kafka-2.2-jdk8 #38

2019-02-28 Thread Apache Jenkins Server
See 


Changes:

[matthias] MINOR: state.cleanup.delay.ms default is 600,000 ms (10 minutes).

--
[...truncated 2.62 MB...]
kafka.api.SaslOAuthBearerSslEndToEndAuthorizationTest > 
testTwoConsumersWithDifferentSaslCredentials STARTED

kafka.api.SaslOAuthBearerSslEndToEndAuthorizationTest > 
testTwoConsumersWithDifferentSaslCredentials PASSED

kafka.api.SaslOAuthBearerSslEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaSubscribe STARTED

kafka.api.SaslOAuthBearerSslEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaSubscribe PASSED

kafka.api.SaslOAuthBearerSslEndToEndAuthorizationTest > 
testProduceConsumeWithPrefixedAcls STARTED

kafka.api.SaslOAuthBearerSslEndToEndAuthorizationTest > 
testProduceConsumeWithPrefixedAcls PASSED

kafka.api.SaslOAuthBearerSslEndToEndAuthorizationTest > 
testProduceConsumeViaAssign STARTED

kafka.api.SaslOAuthBearerSslEndToEndAuthorizationTest > 
testProduceConsumeViaAssign PASSED

kafka.api.SaslOAuthBearerSslEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaAssign STARTED

kafka.api.SaslOAuthBearerSslEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaAssign PASSED

kafka.api.SaslOAuthBearerSslEndToEndAuthorizationTest > 
testProduceConsumeTopicAutoCreateTopicCreateAcl STARTED

kafka.api.SaslOAuthBearerSslEndToEndAuthorizationTest > 
testProduceConsumeTopicAutoCreateTopicCreateAcl PASSED

kafka.api.SaslOAuthBearerSslEndToEndAuthorizationTest > 
testProduceConsumeWithWildcardAcls STARTED

kafka.api.SaslOAuthBearerSslEndToEndAuthorizationTest > 
testProduceConsumeWithWildcardAcls PASSED

kafka.api.SaslOAuthBearerSslEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaSubscribe STARTED

kafka.api.SaslOAuthBearerSslEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaSubscribe PASSED

kafka.api.SaslOAuthBearerSslEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaAssign STARTED
ERROR: Could not install GRADLE_4_8_1_HOME
java.lang.NullPointerException
at 
hudson.plugins.toolenv.ToolEnvBuildWrapper$1.buildEnvVars(ToolEnvBuildWrapper.java:46)
at hudson.model.AbstractBuild.getEnvironment(AbstractBuild.java:881)
at hudson.plugins.git.GitSCM.getParamExpandedRepos(GitSCM.java:484)
at 
hudson.plugins.git.GitSCM.compareRemoteRevisionWithImpl(GitSCM.java:693)
at hudson.plugins.git.GitSCM.compareRemoteRevisionWith(GitSCM.java:658)
at hudson.scm.SCM.compareRemoteRevisionWith(SCM.java:400)
at hudson.scm.SCM.poll(SCM.java:417)
at hudson.model.AbstractProject._poll(AbstractProject.java:1390)
at hudson.model.AbstractProject.poll(AbstractProject.java:1293)
at hudson.triggers.SCMTrigger$Runner.runPolling(SCMTrigger.java:603)
at hudson.triggers.SCMTrigger$Runner.run(SCMTrigger.java:649)
at 
hudson.util.SequentialExecutionQueue$QueueEntry.run(SequentialExecutionQueue.java:119)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)

kafka.api.SaslOAuthBearerSslEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaAssign PASSED

kafka.api.SaslOAuthBearerSslEndToEndAuthorizationTest > testNoGroupAcl STARTED

kafka.api.SaslOAuthBearerSslEndToEndAuthorizationTest > testNoGroupAcl PASSED

kafka.api.SaslOAuthBearerSslEndToEndAuthorizationTest > 
testNoProduceWithDescribeAcl STARTED

kafka.api.SaslOAuthBearerSslEndToEndAuthorizationTest > 
testNoProduceWithDescribeAcl PASSED

kafka.api.SaslOAuthBearerSslEndToEndAuthorizationTest > 
testProduceConsumeViaSubscribe STARTED

kafka.api.SaslOAuthBearerSslEndToEndAuthorizationTest > 
testProduceConsumeViaSubscribe PASSED

kafka.api.SaslOAuthBearerSslEndToEndAuthorizationTest > 
testNoProduceWithoutDescribeAcl STARTED

kafka.api.SaslOAuthBearerSslEndToEndAuthorizationTest > 
testNoProduceWithoutDescribeAcl PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testAcls STARTED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testAcls PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > 
testTwoConsumersWithDifferentSaslCredentials STARTED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > 
testTwoConsumersWithDifferentSaslCredentials PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaSubscribe STARTED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaSubscribe PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > 
testProduceConsumeWithPrefixedAcls STARTED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > 
testProduceConsumeWithPrefixedAcls PASSED


Build failed in Jenkins: kafka-trunk-jdk8 #3425

2019-02-28 Thread Apache Jenkins Server
See 


Changes:

[mjsax] MINOR: state.cleanup.delay.ms default is 600,000 ms (10 minutes).

[jason] MINOR: Improve logging for alter log dirs (#6302)

[github] MINOR: Remove types from caching stores (#6331)

--
[...truncated 4.63 MB...]
org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualWithNullForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 

Build failed in Jenkins: kafka-trunk-jdk11 #327

2019-02-28 Thread Apache Jenkins Server
See 


Changes:

[mjsax] MINOR: state.cleanup.delay.ms default is 600,000 ms (10 minutes).

[jason] MINOR: Improve logging for alter log dirs (#6302)

[github] MINOR: Remove types from caching stores (#6331)

[github] MINOR: fix release.py script (#6317)

--
[...truncated 2.32 MB...]
org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToString STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToString PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimeToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimeToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimestampToDate STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimestampToDate PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimestampToTime STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimestampToTime PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimestampToUnix STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimestampToUnix PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaUnixToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaUnixToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessIdentity STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessIdentity PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaFieldConversion STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaFieldConversion PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaDateToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaDateToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaIdentity STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaIdentity PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToDate STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToDate PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToTime STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToTime PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToUnix STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToUnix PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigMissingFormat STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigMissingFormat PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigNoTargetType STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigNoTargetType PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaStringToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaStringToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimeToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimeToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessUnixToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessUnixToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimestampToString STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimestampToString PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigInvalidFormat STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigInvalidFormat PASSED

org.apache.kafka.connect.transforms.HoistFieldTest > withSchema STARTED

org.apache.kafka.connect.transforms.HoistFieldTest > withSchema PASSED

org.apache.kafka.connect.transforms.HoistFieldTest > schemaless STARTED

org.apache.kafka.connect.transforms.HoistFieldTest > schemaless PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeDateRecordValueWithSchemaString STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeDateRecordValueWithSchemaString PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessBooleanTrue STARTED


Re: [DISCUSS] KIP-427: Add AtMinIsr topic partition category (new metric & TopicCommand option)

2019-02-28 Thread Dong Lin
Hey Kevin, Harsha,

Thanks for the explanation! Now I agree with the motivation and the
addition of this metric.

Regards,
Dong

On Thu, Feb 28, 2019 at 12:30 PM Kevin Lu  wrote:

> [snip]
Build failed in Jenkins: kafka-1.1-jdk7 #248

2019-02-28 Thread Apache Jenkins Server
See 


Changes:

[matthias] MINOR: disable Streams system test for broker upgrade/downgrade 
(#6341)

[matthias] MINOR: state.cleanup.delay.ms default is 600,000 ms (10 minutes).

--
[...truncated 428.84 KB...]

kafka.utils.CoreUtilsTest > testUrlSafeBase64EncodeUUID STARTED

kafka.utils.CoreUtilsTest > testUrlSafeBase64EncodeUUID PASSED

kafka.utils.CoreUtilsTest > testCsvMap STARTED

kafka.utils.CoreUtilsTest > testCsvMap PASSED

kafka.utils.CoreUtilsTest > testInLock STARTED

kafka.utils.CoreUtilsTest > testInLock PASSED

kafka.utils.CoreUtilsTest > testTryAll STARTED

kafka.utils.CoreUtilsTest > testTryAll PASSED

kafka.utils.CoreUtilsTest > testSwallow STARTED

kafka.utils.CoreUtilsTest > testSwallow PASSED

kafka.security.token.delegation.DelegationTokenManagerTest > 
testPeriodicTokenExpiry STARTED

kafka.security.token.delegation.DelegationTokenManagerTest > 
testPeriodicTokenExpiry PASSED

kafka.security.token.delegation.DelegationTokenManagerTest > 
testTokenRequestsWithDelegationTokenDisabled STARTED

kafka.security.token.delegation.DelegationTokenManagerTest > 
testTokenRequestsWithDelegationTokenDisabled PASSED

kafka.security.token.delegation.DelegationTokenManagerTest > testDescribeToken 
STARTED

kafka.security.token.delegation.DelegationTokenManagerTest > testDescribeToken 
PASSED

kafka.security.token.delegation.DelegationTokenManagerTest > testCreateToken 
STARTED

kafka.security.token.delegation.DelegationTokenManagerTest > testCreateToken 
PASSED

kafka.security.token.delegation.DelegationTokenManagerTest > testExpireToken 
STARTED

kafka.security.token.delegation.DelegationTokenManagerTest > testExpireToken 
PASSED

kafka.security.token.delegation.DelegationTokenManagerTest > testRenewToken 
STARTED

kafka.security.token.delegation.DelegationTokenManagerTest > testRenewToken 
PASSED

kafka.security.auth.ZkAuthorizationTest > testIsZkSecurityEnabled STARTED

kafka.security.auth.ZkAuthorizationTest > testIsZkSecurityEnabled PASSED

kafka.security.auth.ZkAuthorizationTest > testZkUtils STARTED

kafka.security.auth.ZkAuthorizationTest > testZkUtils PASSED

kafka.security.auth.ZkAuthorizationTest > testZkAntiMigration STARTED

kafka.security.auth.ZkAuthorizationTest > testZkAntiMigration PASSED

kafka.security.auth.ZkAuthorizationTest > testConsumerOffsetPathAcls STARTED

kafka.security.auth.ZkAuthorizationTest > testConsumerOffsetPathAcls PASSED

kafka.security.auth.ZkAuthorizationTest > testZkMigration STARTED

kafka.security.auth.ZkAuthorizationTest > testZkMigration PASSED

kafka.security.auth.ZkAuthorizationTest > testChroot STARTED

kafka.security.auth.ZkAuthorizationTest > testChroot PASSED

kafka.security.auth.ZkAuthorizationTest > testDelete STARTED

kafka.security.auth.ZkAuthorizationTest > testDelete PASSED

kafka.security.auth.ZkAuthorizationTest > testDeleteRecursive STARTED

kafka.security.auth.ZkAuthorizationTest > testDeleteRecursive PASSED

kafka.security.auth.ResourceTypeTest > testJavaConversions STARTED

kafka.security.auth.ResourceTypeTest > testJavaConversions PASSED

kafka.security.auth.ResourceTypeTest > testFromString STARTED

kafka.security.auth.ResourceTypeTest > testFromString PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAllowAllAccess STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAllowAllAccess PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testLocalConcurrentModificationOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testLocalConcurrentModificationOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyDeletionOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyDeletionOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFound STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFound PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAclInheritance STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAclInheritance PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testDistributedConcurrentModificationOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testDistributedConcurrentModificationOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAclManagementAPIs STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAclManagementAPIs PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testWildCardAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testWildCardAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testTopicAcl STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testTopicAcl PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testSuperUserHasAccess STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testSuperUserHasAccess PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 

Build failed in Jenkins: kafka-2.1-jdk8 #137

2019-02-28 Thread Apache Jenkins Server
See 


Changes:

[matthias] MINOR: state.cleanup.delay.ms default is 600,000 ms (10 minutes).

--
[...truncated 1.92 MB...]

org.apache.kafka.trogdor.coordinator.CoordinatorTest > testTaskCancellation 
STARTED

org.apache.kafka.trogdor.coordinator.CoordinatorTest > testTaskCancellation 
PASSED

org.apache.kafka.trogdor.coordinator.CoordinatorTest > testCreateTask STARTED

org.apache.kafka.trogdor.coordinator.CoordinatorTest > testCreateTask PASSED

org.apache.kafka.trogdor.coordinator.CoordinatorTest > testTasksRequest STARTED

org.apache.kafka.trogdor.coordinator.CoordinatorTest > testTasksRequest PASSED

org.apache.kafka.trogdor.coordinator.CoordinatorTest > testCoordinatorStatus 
STARTED

org.apache.kafka.trogdor.coordinator.CoordinatorTest > testCoordinatorStatus 
PASSED

org.apache.kafka.trogdor.agent.AgentTest > testAgentCreateWorkers STARTED

org.apache.kafka.trogdor.agent.AgentTest > testAgentCreateWorkers PASSED

org.apache.kafka.trogdor.agent.AgentTest > testAgentGetStatus STARTED

org.apache.kafka.trogdor.agent.AgentTest > testAgentGetStatus PASSED

org.apache.kafka.trogdor.agent.AgentTest > testAgentStartShutdown STARTED

org.apache.kafka.trogdor.agent.AgentTest > testAgentStartShutdown PASSED

org.apache.kafka.trogdor.agent.AgentTest > testAgentProgrammaticShutdown STARTED

org.apache.kafka.trogdor.agent.AgentTest > testAgentProgrammaticShutdown PASSED

org.apache.kafka.trogdor.agent.AgentTest > testDestroyWorkers STARTED

org.apache.kafka.trogdor.agent.AgentTest > testDestroyWorkers PASSED

org.apache.kafka.trogdor.agent.AgentTest > testKiboshFaults STARTED

org.apache.kafka.trogdor.agent.AgentTest > testKiboshFaults PASSED

org.apache.kafka.trogdor.agent.AgentTest > testWorkerCompletions STARTED

org.apache.kafka.trogdor.agent.AgentTest > testWorkerCompletions PASSED

org.apache.kafka.trogdor.agent.AgentTest > testAgentFinishesTasks STARTED

org.apache.kafka.trogdor.agent.AgentTest > testAgentFinishesTasks PASSED

org.apache.kafka.trogdor.task.TaskSpecTest > testTaskSpecSerialization STARTED

org.apache.kafka.trogdor.task.TaskSpecTest > testTaskSpecSerialization PASSED

org.apache.kafka.trogdor.workload.PayloadGeneratorTest > 
testConstantPayloadGenerator STARTED

org.apache.kafka.trogdor.workload.PayloadGeneratorTest > 
testConstantPayloadGenerator PASSED

org.apache.kafka.trogdor.workload.PayloadGeneratorTest > 
testSequentialPayloadGenerator STARTED

org.apache.kafka.trogdor.workload.PayloadGeneratorTest > 
testSequentialPayloadGenerator PASSED

org.apache.kafka.trogdor.workload.PayloadGeneratorTest > 
testNullPayloadGenerator STARTED

org.apache.kafka.trogdor.workload.PayloadGeneratorTest > 
testNullPayloadGenerator PASSED

org.apache.kafka.trogdor.workload.PayloadGeneratorTest > 
testUniformRandomPayloadGenerator STARTED

org.apache.kafka.trogdor.workload.PayloadGeneratorTest > 
testUniformRandomPayloadGenerator PASSED

org.apache.kafka.trogdor.workload.PayloadGeneratorTest > testPayloadIterator 
STARTED

org.apache.kafka.trogdor.workload.PayloadGeneratorTest > testPayloadIterator 
PASSED

org.apache.kafka.trogdor.workload.PayloadGeneratorTest > 
testUniformRandomPayloadGeneratorPaddingBytes STARTED

org.apache.kafka.trogdor.workload.PayloadGeneratorTest > 
testUniformRandomPayloadGeneratorPaddingBytes PASSED

org.apache.kafka.trogdor.workload.HistogramTest > testHistogramPercentiles 
STARTED

org.apache.kafka.trogdor.workload.HistogramTest > testHistogramPercentiles 
PASSED

org.apache.kafka.trogdor.workload.HistogramTest > testHistogramSamples STARTED

org.apache.kafka.trogdor.workload.HistogramTest > testHistogramSamples PASSED

org.apache.kafka.trogdor.workload.HistogramTest > testHistogramAverage STARTED

org.apache.kafka.trogdor.workload.HistogramTest > testHistogramAverage PASSED

org.apache.kafka.trogdor.workload.ThrottleTest > testThrottle STARTED

org.apache.kafka.trogdor.workload.ThrottleTest > testThrottle PASSED

org.apache.kafka.trogdor.workload.TopicsSpecTest > testPartitionNumbers STARTED

org.apache.kafka.trogdor.workload.TopicsSpecTest > testPartitionNumbers PASSED

org.apache.kafka.trogdor.workload.TopicsSpecTest > testMaterialize STARTED

org.apache.kafka.trogdor.workload.TopicsSpecTest > testMaterialize PASSED

org.apache.kafka.trogdor.rest.RestExceptionMapperTest > 
testToResponseInvalidRequestException STARTED

org.apache.kafka.trogdor.rest.RestExceptionMapperTest > 
testToResponseInvalidRequestException PASSED

org.apache.kafka.trogdor.rest.RestExceptionMapperTest > 
testToResponseJsonMappingException STARTED

org.apache.kafka.trogdor.rest.RestExceptionMapperTest > 
testToResponseJsonMappingException PASSED

org.apache.kafka.trogdor.rest.RestExceptionMapperTest > testToResponseNotFound 
STARTED

org.apache.kafka.trogdor.rest.RestExceptionMapperTest > testToResponseNotFound 
PASSED

org.apache.kafka.trogdor.rest.RestExceptionMapperTest > 

Re: [DISCUSS] KIP-427: Add AtMinIsr topic partition category (new metric & TopicCommand option)

2019-02-28 Thread Kevin Lu
Hi Harsha, thanks for the context. It's very useful to see how this
AtMinIsr metric will function under different environments.

Hi Dong,

Thanks for the response. Let me try to change your mind. :)

I think it's important to relate this metric with the different
scenarios/configurations that people use, so I have modified the KIP to
include material from our discussions.

Let's first define UnderReplicated and AtMinIsr that applies in all
scenarios:

*UnderReplicated:* A partition in which the isr_set_size is not equal to
the replica_set_size (isr_set_size can be bigger or smaller than
replica_replication_factor)
*AtMinIsr:* A partition in which the isr_set_size is equal to the
min_isr_size, which also means one more drop in isr_set_size will cause
producer (acks=ALL) requests to fail
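To make the two definitions concrete, here is a minimal, self-contained
sketch (illustrative only; `IsrCategories` and its method names are mine,
not Kafka's actual metric code):

```java
public class IsrCategories {
    // UnderReplicated: ISR size differs from the replication factor
    // (it can be smaller, or temporarily larger during reassignment).
    static boolean isUnderReplicated(int isrSize, int replicationFactor) {
        return isrSize != replicationFactor;
    }

    // AtMinIsr: one more ISR drop would fail acks=ALL producers.
    static boolean isAtMinIsr(int isrSize, int minIsr) {
        return isrSize == minIsr;
    }

    public static void main(String[] args) {
        // replication-factor = 3, min.insync.replicas = 2, one replica down:
        System.out.println(isUnderReplicated(2, 3) + " " + isAtMinIsr(2, 2)); // true true
        // ISR expanded to [old_set + new_set] during reassignment:
        // UnderReplicated fires, AtMinIsr does not.
        System.out.println(isUnderReplicated(4, 3) + " " + isAtMinIsr(4, 2)); // true false
    }
}
```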

We can see that there is some overlap between the two definitions,
especially in the examples you have provided. In these cases, the AtMinIsr
metric is the exact same as the UnderReplicated metric. However, here are a
few scenarios in which AtMinIsr provides an improvement over
UnderReplicated:

*(1) Repartitioning*

When an admin triggers a repartition, the ISR set is first expanded from
[old_set] to [old_set + new_set], and then reduced to just the [new_set].
In this case, UnderReplicated will be non-zero even when the ISR set is
[old_set + new_set]. AtMinIsr will not be non-zero during the [old_set +
new_set] step unless something goes wrong during repartitioning and
replicas are failing to fetch (reducing the isr_set_size to min_isr_size),
but we want to know if this happens.

*(2) min.insync.replicas = 1*

The default value for this configuration is 1, and users can change this to
provide higher durability guarantees. In the default scenario where
min.insync.replicas = 1 and replication-factor = 3, the AtMinIsr metric
will be non-zero when isr_set_size = 1, which tells us that 1 more drop in
this set will lead to a completely unavailable partition. This is very
powerful for users that have min.insync.replicas = 1 and
replication-factor > 2.

*(3) replication-factor - min.insync.replicas > 1*

Kafka is built to be fault-tolerant, so we ideally want to be able to
tolerate more than 1 failure which means we want the difference between
replication-factor and min.insync.replicas to be > 1. If it is equal to 1,
then we can only tolerate 1 failure otherwise acks=ALL producers will fail.

We generally want isr_set_size to equal replica_replication_factor to have
the best guarantees, but this is not always possible for all Kafka users
depending on their environment and resources. In some situations, we can
allow the isr_set_size to be reduced, especially if we can tolerate more
than 1 failure (replication-factor - min.insync.replicas > 1). The only
requirement is that the isr_set_size must be at least min_isr_size
otherwise acks=ALL producers will fail.

One example is a cluster under massive load where we do not want to
trigger a repartition to make isr_set_size = replica_replication_factor
unless absolutely necessary, since repartitioning introduces additional
load that can impact clients. Maybe we also expect the failed broker to be
restored soon, so we do not want to act unless we must. In these
scenarios, the AtMinIsr metric tells us when we absolutely need to
*consider* repartitioning or some other action to restore the health of
the cluster (a false positive is still possible, but the metric tells us
that, at the time it was non-zero, we could not tolerate any more failures
without acks=ALL producers failing).
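To make the distinction concrete, the two conditions can be sketched as plain predicates. This is an illustrative plain-Java sketch with invented names, not the broker's actual implementation:

```java
// Illustrative sketch of the two partition-health conditions compared above.
// Class and method names are invented for this example; the real checks live
// inside the Kafka broker.
public class PartitionHealth {

    // UnderReplicated: the ISR has shrunk below the full replica set.
    static boolean isUnderReplicated(int isrSetSize, int replicationFactor) {
        return isrSetSize < replicationFactor;
    }

    // AtMinIsr: the ISR is down to exactly min.insync.replicas, so one more
    // failure makes acks=ALL producers fail.
    static boolean isAtMinIsr(int isrSetSize, int minInsyncReplicas) {
        return isrSetSize == minInsyncReplicas;
    }

    public static void main(String[] args) {
        // Scenario (2) above: replication-factor = 3, min.insync.replicas = 1.
        int rf = 3, minIsr = 1;
        System.out.println(isUnderReplicated(2, rf)); // true: ISR = 2 of 3
        System.out.println(isAtMinIsr(2, minIsr));    // false: one failure of slack left
        System.out.println(isAtMinIsr(1, minIsr));    // true: the next failure means offline
    }
}
```

With min_isr_size = replica_set_size - 1 the two predicates fire together, which is why AtMinIsr only adds information in the other scenarios.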

In our Kafka environment, we do not even have alerts configured for
UnderReplicated, as it is too noisy for us and we can tolerate some
failures. We run a periodic job that performs the same check as AtMinIsr,
but it would be better to have it as a metric so we can configure an alert
on it.


The usage of the AtMinIsr metric is the same as UnderReplicated. If the
user has alerts configured on UnderReplicated and is using min_isr_size =
replica_set_size - 1, then AtMinIsr will be the same as UnderReplicated.
In the other scenarios listed above, AtMinIsr can be treated as a more
severe alert. If UnderReplicated is not too noisy for the user, they can
keep the UnderReplicated alert and set an AtMinIsr alert with higher
severity.


The way I see it, the AtMinIsr metric is at least as good as the
UnderReplicated metric, and better in some scenarios such as the ones
listed above.

Regards,
Kevin

On Thu, Feb 28, 2019 at 10:21 AM Harsha  wrote:

> Hi Dong,
>  I think AtMinIsr is still valuable to indicate that the cluster is in
> a critical state and something needs to be done ASAP to restore it.
> To your example
> " let's say min_isr = 1 and replica_set_size = 3, it is
> > still possible that planned maintenance (e.g. one broker restart +
> > partition reassignment) can cause isr size drop to 1. Since AtMinIsr can
> > also cause false positives (i.e. the fact that AtMinIsr > 0 does not
> > necessarily 

Build failed in Jenkins: kafka-trunk-jdk8 #3424

2019-02-28 Thread Apache Jenkins Server
See 


Changes:

[mjsax] MINOR: disable Streams system test for broker upgrade/downgrade (#6341)

--
[...truncated 4.62 MB...]

org.apache.kafka.trogdor.coordinator.CoordinatorTest > testTasksRequestMatches 
PASSED

org.apache.kafka.trogdor.coordinator.CoordinatorTest > 
testWorkersExitingAtDifferentTimes STARTED

org.apache.kafka.trogdor.coordinator.CoordinatorTest > 
testWorkersExitingAtDifferentTimes PASSED

org.apache.kafka.trogdor.coordinator.CoordinatorTest > 
testAgentFailureAndTaskExpiry STARTED

org.apache.kafka.trogdor.coordinator.CoordinatorTest > 
testAgentFailureAndTaskExpiry PASSED

org.apache.kafka.trogdor.coordinator.CoordinatorTest > testTaskDistribution 
STARTED

org.apache.kafka.trogdor.coordinator.CoordinatorTest > testTaskDistribution 
PASSED

org.apache.kafka.trogdor.coordinator.CoordinatorTest > 
testTaskRequestWithOldStartMsGetsUpdated STARTED

org.apache.kafka.trogdor.coordinator.CoordinatorTest > 
testTaskRequestWithOldStartMsGetsUpdated PASSED

org.apache.kafka.trogdor.coordinator.CoordinatorTest > 
testNetworkPartitionFault STARTED

org.apache.kafka.trogdor.coordinator.CoordinatorTest > 
testNetworkPartitionFault PASSED

org.apache.kafka.trogdor.coordinator.CoordinatorTest > 
testTaskRequestWithFutureStartMsDoesNotGetRun STARTED

org.apache.kafka.trogdor.coordinator.CoordinatorTest > 
testTaskRequestWithFutureStartMsDoesNotGetRun PASSED

org.apache.kafka.trogdor.coordinator.CoordinatorTest > testTaskCancellation 
STARTED

org.apache.kafka.trogdor.coordinator.CoordinatorTest > testTaskCancellation 
PASSED

org.apache.kafka.trogdor.coordinator.CoordinatorTest > testCreateTask STARTED

org.apache.kafka.trogdor.coordinator.CoordinatorTest > testCreateTask PASSED

org.apache.kafka.trogdor.coordinator.CoordinatorTest > testTasksRequest STARTED

org.apache.kafka.trogdor.coordinator.CoordinatorTest > testTasksRequest PASSED

org.apache.kafka.trogdor.coordinator.CoordinatorTest > testCoordinatorStatus 
STARTED

org.apache.kafka.trogdor.coordinator.CoordinatorTest > testCoordinatorStatus 
PASSED

org.apache.kafka.trogdor.coordinator.CoordinatorTest > testCoordinatorUptime 
STARTED

org.apache.kafka.trogdor.coordinator.CoordinatorTest > testCoordinatorUptime 
PASSED

org.apache.kafka.trogdor.coordinator.CoordinatorClientTest > 
testPrettyPrintTaskInfo STARTED

org.apache.kafka.trogdor.coordinator.CoordinatorClientTest > 
testPrettyPrintTaskInfo PASSED

> Task :streams:streams-scala:test

org.apache.kafka.streams.scala.WordCountTest > testShouldCountWordsMaterialized 
STARTED

org.apache.kafka.streams.scala.WordCountTest > testShouldCountWordsMaterialized 
PASSED

org.apache.kafka.streams.scala.WordCountTest > testShouldCountWordsJava STARTED

org.apache.kafka.streams.scala.WordCountTest > testShouldCountWordsJava PASSED

org.apache.kafka.streams.scala.WordCountTest > testShouldCountWords STARTED

org.apache.kafka.streams.scala.WordCountTest > testShouldCountWords PASSED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaJoin STARTED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaJoin PASSED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaSimple STARTED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaSimple PASSED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaAggregate STARTED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaAggregate PASSED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaTransform STARTED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaTransform PASSED

org.apache.kafka.streams.scala.StreamToTableJoinScalaIntegrationTestImplicitSerdes
 > testShouldCountClicksPerRegionWithNamedRepartitionTopic STARTED

org.apache.kafka.streams.scala.StreamToTableJoinScalaIntegrationTestImplicitSerdes
 > testShouldCountClicksPerRegionWithNamedRepartitionTopic PASSED

org.apache.kafka.streams.scala.StreamToTableJoinScalaIntegrationTestImplicitSerdes
 > testShouldCountClicksPerRegionJava STARTED

org.apache.kafka.streams.scala.StreamToTableJoinScalaIntegrationTestImplicitSerdes
 > testShouldCountClicksPerRegionJava PASSED

org.apache.kafka.streams.scala.StreamToTableJoinScalaIntegrationTestImplicitSerdes
 > testShouldCountClicksPerRegion STARTED

org.apache.kafka.streams.scala.StreamToTableJoinScalaIntegrationTestImplicitSerdes
 > testShouldCountClicksPerRegion PASSED

org.apache.kafka.streams.scala.kstream.KStreamTest > filter a KStream should 
filter records satisfying the predicate STARTED

org.apache.kafka.streams.scala.kstream.KStreamTest > filter a KStream should 
filter records satisfying the predicate PASSED


[DISCUSS] KIP-429 : Smooth Auto-Scaling for Kafka Streams

2019-02-28 Thread Boyang Chen
Hey community friends,

I'd like to invite you to have a look at the proposal to add incremental 
rebalancing to Kafka Streams, a.k.a. auto-scaling support.

https://cwiki.apache.org/confluence/display/KAFKA/KIP-429%3A+Smooth+Auto-Scaling+for+Kafka+Streams

Special thanks to Guozhang for giving great guidance and important feedback 
while shaping this KIP!

Best,
Boyang


Re: [DISCUSSION] KIP-422: Add support for user/client configuration in the Kafka Admin Client

2019-02-28 Thread Yaodong Yang
Hello folks,

Please share your comments for this KIP 

Thanks!
Yaodong

On Tue, Feb 26, 2019 at 6:53 PM Yaodong Yang 
wrote:

> Hello Colin,
>
> There is a POC PR for this KIP, and it contains most changes we are
> proposing now.
>
> Best,
> Yaodong
>
> On Tue, Feb 26, 2019 at 6:51 PM Yaodong Yang 
> wrote:
>
>> Hello Colin,
>>
>> CIL,
>>
>> Thanks!
>> Yaodong
>>
>>
>> On Tue, Feb 26, 2019 at 9:59 AM Colin McCabe  wrote:
>>
>>> Hi Yaodong,
>>>
>>> I don't understand how the proposed API would be used.  It talks about
>>> adding a ConfigResource type for clients and users, but doesn't explain
>>> what can be done with these.
>>>
>>
>> Sorry for the confusion. I just updated the KIP, and hopefully it will
>> make it easier for you and other people. Looking forward to your feedback!
>>
>>
>>> In the compatibility section (?) it says "We only add a new way to
>>> configure the quotas", which suggests that quotas are involved somehow.
>>> What relationship does this have with KIP-257?
>>>
>>
>> Let me give you more context, feel free to correct me if I'm wrong 
>>
>> 1. Originally we hit an issue where we could not configure client quotas
>> through the AdminClient. The only way available to us was to talk to ZK
>> directly and manage the quotas there.
>>
>> 2. As our client service may not be in the same DC as ZooKeeper, there
>> could be some cross-DC communication, which is less desirable.
>>
>> 3. We decided to add the quota configuration feature to the AdminClient,
>> which solves this issue for us nicely.
>>
>> 4. In addition, we realized that this change can also serve as a way to
>> configure other user or client configurations in ZooKeeper. For instance,
>> if a new client configuration is introduced in the future and it needs to
>> live in ZooKeeper as well, we can manage it through the same API.
>> Therefore, this KIP was renamed to managing user/client configurations.
>> Quota management is one use case for this configuration management.
>>
>> 5. KIP-257 is also compatible with the current KIP. For instance, if a
>> user wants to update a quota for a metric, the client side needs to parse
>> it and eventually pass a user or client config to the AdminClient, which
>> will make sure such configuration changes are applied in ZooKeeper.
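For background on what is being configured here: the broker keeps user/client quota overrides as entity nodes under ZooKeeper's /config subtree. The path layout below mirrors Kafka's; the helper class itself is hypothetical, written only to illustrate the entities KIP-422 targets:

```java
import java.util.Map;

// Hypothetical helper showing where user/client quota overrides live in
// ZooKeeper. Only the /config path layout mirrors Kafka; note that real
// brokers also sanitize (URL-encode) principal names before using them as
// node names, which is skipped here.
public class QuotaConfigPaths {

    static String path(String user, String clientId) {
        if (user != null && clientId != null)
            return "/config/users/" + user + "/clients/" + clientId;
        if (user != null)
            return "/config/users/" + user;
        return "/config/clients/" + clientId;
    }

    public static void main(String[] args) {
        // Quota keys the broker recognizes for these entities.
        Map<String, String> quota = Map.of(
                "producer_byte_rate", "1048576",
                "consumer_byte_rate", "2097152");
        System.out.println(path("alice", null) + " -> " + quota);
        System.out.println(path("alice", "mirror-maker"));
        System.out.println(path(null, "mirror-maker"));
    }
}
```

The KIP's goal is to let AdminClient users reach these entities without speaking to ZooKeeper directly.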
>>
>>
>>> best,
>>> Colin
>>>
>>>
>>> On Fri, Feb 22, 2019, at 15:11, Yaodong Yang wrote:
>>> > Hi Colin,
>>> >
>>> > CIL,
>>> >
>>> > Thanks!
>>> > Yaodong
>>> >
>>> >
>>> > On Fri, Feb 22, 2019 at 10:56 AM Colin McCabe 
>>> wrote:
>>> >
>>> > > Hi Yaodong,
>>> > >
>>> > > KIP-422 says that it would be good if "applications [could] leverage
>>> the
>>> > > unified KafkaAdminClient to manage their user/client configurations,
>>> > > instead of the direct dependency on Zookeeper."  But the KIP doesn't
>>> talk
>>> > > about any changes to KafkaAdminClient.  Instead, the only changes
>>> proposed
>>> > > are to AdminZKClient.  But  that is an internal class-- we don't
>>> need a KIP
>>> > > to change it, and it's not a public API that users can use.
>>> > >
>>> >
>>> > Sorry for the confusion in the KIP. Actually there is no change to
>>> > AdminZKClient needed for this KIP, we just leverage them to configure
>>> the
>>> > properties in the ZK. You can find the details from this PR
>>> > https://github.com/apache/kafka/pull/6189
>>> >
>>> > As you can see from the PR, we need the client side and server process
>>> > changes, so I feel like we still need the KIP for this change.
>>> >
>>> >
>>> > > I realize that the naming might be a bit confusing, but
>>> > > kafka.zk.AdminZKClient and kafka.admin.AdminClient are internal
>>> classes.
>>> > > As the JavaDoc says, kafka.admin.AdminClient is deprecated as well.
>>> The
>>> > > public class that we would be adding new methods to is
>>> > > org.apache.kafka.clients.admin.AdminClient.
>>> > >
>>> >
>>> > I agree. Thanks for pointing this out!
>>> >
>>> >
>>> > > best,
>>> > > Colin
>>> > >
>>> > > On Tue, Feb 19, 2019, at 15:21, Yaodong Yang wrote:
>>> > > > Hello Jun, Viktor, Snoke and Stan,
>>> > > >
>>> > > > Thanks for taking time to look at this KIP-422! For some reason,
>>> this
>>> > > email
>>> > > > was put in my spam folder. Sorry about that.
>>> > > >
>>> > > > Jun is right, the main motivation for this KIP-422 is to allow
>>> users to
>>> > > > config user/clientId quota through AdminClient. In addition, this
>>> KIP-422
>>> > > > also allows users to set or update any config related to a user or
>>> > > clientId
>>> > > > entity if needed in the future.
>>> > > >
>>> > > > For the KIP-257, I agree with Jun that we should add support for
>>> it. I
>>> > > will
>>> > > > look at the current implementation and update the KIP-422 with new
>>> > > change.
>>> > > >
>>> > > > I will ping this thread once I updated the KIP.
>>> > > >
>>> > > > Thanks again!
>>> > > > Yaodong
>>> > > >
>>> > > > On Fri, Feb 15, 2019 at 1:28 AM Viktor Somogyi-Vass <
>>> > > viktorsomo...@gmail.com>
>>> > > > wrote:
>>> > > >
>>> > > > > Hi 

Re: [DISCUSS] KIP-427: Add AtMinIsr topic partition category (new metric & TopicCommand option)

2019-02-28 Thread Harsha
Hi Dong,
 I think AtMinIsr is still valuable to indicate that the cluster is in a 
critical state and something needs to be done ASAP to restore it.
To your example 
" let's say min_isr = 1 and replica_set_size = 3, it is
> still possible that planned maintenance (e.g. one broker restart +
> partition reassignment) can cause isr size drop to 1. Since AtMinIsr can
> also cause false positives (i.e. the fact that AtMinIsr > 0 does not
> necessarily need attention from user), "

One broker restart shouldn't cause the ISR to drop from 3 to 1 unless 2 of the 
replicas are co-located on the same broker.
This is still a valuable indicator to the admins that the partition assignment 
needs to be moved.

In our case, we run 4 replicas for critical topics with min.isr = 2. URPs are 
not really a good indicator for taking immediate action if one of the replicas 
is down. If 2 replicas are down and we are left with 2 alive replicas, it is a 
stop-everything situation until the cluster is restored to a good state.

Thanks,
Harsha






On Wed, Feb 27, 2019, at 11:17 PM, Dong Lin wrote:
> Hey Kevin,
> 
> Thanks for the update.
> 
> The KIP suggests that AtMinIsr is better than UnderReplicatedPartition as
> an indicator for alerting. However, in the most common case, where min_isr =
> replica_set_size - 1, these two metrics are exactly the same, and planned
> maintenance can easily cause a positive AtMinIsr value. In the other
> scenario, for example let's say min_isr = 1 and replica_set_size = 3, it is
> still possible that planned maintenance (e.g. one broker restart +
> partition reassignment) can cause the isr size to drop to 1. Since AtMinIsr
> can also produce false positives (i.e. the fact that AtMinIsr > 0 does not
> necessarily need attention from the user), I am not sure it is worth adding
> this metric.
> 
> In the Usage section, it is mentioned that the user needs to manually check
> whether there is ongoing maintenance after AtMinIsr is triggered. Could you
> explain how this is different from the current way, where we use
> UnderReplicatedPartition to trigger alerts? More specifically, can we just
> replace AtMinIsr with UnderReplicatedPartition in the Usage section?
> 
> Thanks,
> Dong
> 
> 
> On Tue, Feb 26, 2019 at 6:49 PM Kevin Lu  wrote:
> 
> > Hi Dong!
> >
> > Thanks for the feedback!
> >
> > You bring up a good point in that the AtMinIsr metric cannot be used to
> > identify failure in the mentioned scenarios. I admit the motivation section
> > placed too much emphasis on "identifying failure".
> >
> > I have modified the KIP to reflect the implementation: the AtMinIsr
> > metric is intended to serve as a warning that one more failure to a
> > partition AtMinIsr will cause producers configured with acks=ALL to
> > fail. It has an additional benefit when minIsr=1, as it warns us that
> > the entire partition is at risk of going offline, but that is more of a
> > side effect that only applies in that scenario (minIsr=1).
> >
> > Regards,
> > Kevin
> >
> > On Tue, Feb 26, 2019 at 5:11 PM Dong Lin  wrote:
> >
> > > Hey Kevin,
> > >
> > > Thanks for the proposal!
> > >
> > > It seems that the proposed implementation does not match the motivation.
> > > The motivation suggests that the operator wants to tell the planned
> > > maintenance (e.g. broker restart) from unplanned failure (e.g. network
> > > failure). But the use of the metric AtMinIsr does not really
> > differentiate
> > > between these causes of a reduced number of ISRs. For example, an
> > > unplanned failure can cause the ISR to drop from 3 to 2 but still stay
> > > higher than minIsr (say 1). And planned maintenance can cause the ISR to
> > > drop from 3 to 2, which triggers the AtMinIsr metric if minIsr=2. Can you
> > > update the design doc to fix or explain this issue?
> > >
> > > Thanks,
> > > Dong
> > >
> > > On Tue, Feb 12, 2019 at 9:02 AM Kevin Lu  wrote:
> > >
> > > > Hi All,
> > > >
> > > > Getting the discussion thread started for KIP-427 in case anyone is
> > free
> > > > right now.
> > > >
> > > > I’d like to propose a new category of topic partitions *AtMinIsr* which
> > > are
> > > > partitions that only have the minimum number of in sync replicas left
> > in
> > > > the ISR set (as configured by min.insync.replicas).
> > > >
> > > > This would add two new metrics *ReplicaManager.AtMinIsrPartitionCount
> > *&
> > > > *Partition.AtMinIsr*, and a new TopicCommand option*
> > > > --at-min-isr-partitions* to help in monitoring and alerting.
> > > >
> > > > KIP link: KIP-427: Add AtMinIsr topic partition category (new metric &
> > > > TopicCommand option)
> > > > <
> > > >
> > >
> > https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=103089398
> > > > >
> > > >
> > > > Please take a look and let me know what you think.
> > > >
> > > > Regards,
> > > > Kevin
> > > >
> > >
> >
>


Build failed in Jenkins: kafka-2.0-jdk8 #232

2019-02-28 Thread Apache Jenkins Server
See 


Changes:

[matthias] MINOR: disable Streams system test for broker upgrade/downgrade 
(#6341)

[matthias] MINOR: state.cleanup.delay.ms default is 600,000 ms (10 minutes).

--
[...truncated 236.99 KB...]

kafka.api.AuthorizerIntegrationTest > testCommitWithNoGroupAccess PASSED

kafka.api.AuthorizerIntegrationTest > 
testTransactionalProducerInitTransactionsNoDescribeTransactionalIdAcl STARTED

kafka.api.AuthorizerIntegrationTest > 
testTransactionalProducerInitTransactionsNoDescribeTransactionalIdAcl PASSED

kafka.api.AuthorizerIntegrationTest > testUnauthorizedDeleteRecordsWithDescribe 
STARTED

kafka.api.AuthorizerIntegrationTest > testUnauthorizedDeleteRecordsWithDescribe 
PASSED

kafka.api.AuthorizerIntegrationTest > 
testCreateTopicAuthorizationWithClusterCreate STARTED

kafka.api.AuthorizerIntegrationTest > 
testCreateTopicAuthorizationWithClusterCreate PASSED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchWithTopicAndGroupRead 
STARTED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchWithTopicAndGroupRead 
PASSED

kafka.api.AuthorizerIntegrationTest > testCommitWithTopicDescribe STARTED

kafka.api.AuthorizerIntegrationTest > testCommitWithTopicDescribe PASSED

kafka.api.AuthorizerIntegrationTest > testAuthorizationWithTopicExisting STARTED

kafka.api.AuthorizerIntegrationTest > testAuthorizationWithTopicExisting PASSED

kafka.api.AuthorizerIntegrationTest > 
testUnauthorizedDeleteRecordsWithoutDescribe STARTED

kafka.api.AuthorizerIntegrationTest > 
testUnauthorizedDeleteRecordsWithoutDescribe PASSED

kafka.api.AuthorizerIntegrationTest > testProduceWithTopicDescribe STARTED

kafka.api.AuthorizerIntegrationTest > testProduceWithTopicDescribe PASSED

kafka.api.AuthorizerIntegrationTest > testDescribeGroupApiWithNoGroupAcl STARTED

kafka.api.AuthorizerIntegrationTest > testDescribeGroupApiWithNoGroupAcl PASSED

kafka.api.AuthorizerIntegrationTest > 
testPatternSubscriptionMatchingInternalTopic STARTED

kafka.api.AuthorizerIntegrationTest > 
testPatternSubscriptionMatchingInternalTopic PASSED

kafka.api.AuthorizerIntegrationTest > 
testSendOffsetsWithNoConsumerGroupDescribeAccess STARTED

kafka.api.AuthorizerIntegrationTest > 
testSendOffsetsWithNoConsumerGroupDescribeAccess PASSED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchTopicDescribe STARTED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchTopicDescribe PASSED

kafka.api.AuthorizerIntegrationTest > testCommitWithTopicAndGroupRead STARTED

kafka.api.AuthorizerIntegrationTest > testCommitWithTopicAndGroupRead PASSED

kafka.api.AuthorizerIntegrationTest > 
testIdempotentProducerNoIdempotentWriteAclInInitProducerId STARTED

kafka.api.AuthorizerIntegrationTest > 
testIdempotentProducerNoIdempotentWriteAclInInitProducerId PASSED

kafka.api.AuthorizerIntegrationTest > 
testSimpleConsumeWithExplicitSeekAndNoGroupAccess STARTED

kafka.api.AuthorizerIntegrationTest > 
testSimpleConsumeWithExplicitSeekAndNoGroupAccess PASSED

kafka.api.SaslPlaintextConsumerTest > testCoordinatorFailover STARTED

kafka.api.SaslPlaintextConsumerTest > testCoordinatorFailover PASSED

kafka.api.SaslPlaintextConsumerTest > testSimpleConsumption STARTED

kafka.api.SaslPlaintextConsumerTest > testSimpleConsumption PASSED

kafka.api.AdminClientIntegrationTest > testDescribeReplicaLogDirs STARTED

kafka.api.AdminClientIntegrationTest > testDescribeReplicaLogDirs PASSED

kafka.api.AdminClientIntegrationTest > testInvalidAlterConfigs STARTED

kafka.api.AdminClientIntegrationTest > testInvalidAlterConfigs PASSED

kafka.api.AdminClientIntegrationTest > testAlterLogDirsAfterDeleteRecords 
STARTED

kafka.api.AdminClientIntegrationTest > testAlterLogDirsAfterDeleteRecords PASSED

kafka.api.AdminClientIntegrationTest > testConsumeAfterDeleteRecords STARTED

kafka.api.AdminClientIntegrationTest > testConsumeAfterDeleteRecords PASSED

kafka.api.AdminClientIntegrationTest > testClose STARTED

kafka.api.AdminClientIntegrationTest > testClose PASSED

kafka.api.AdminClientIntegrationTest > 
testReplicaCanFetchFromLogStartOffsetAfterDeleteRecords STARTED

kafka.api.AdminClientIntegrationTest > 
testReplicaCanFetchFromLogStartOffsetAfterDeleteRecords PASSED

kafka.api.AdminClientIntegrationTest > testMinimumRequestTimeouts STARTED

kafka.api.AdminClientIntegrationTest > testMinimumRequestTimeouts PASSED

kafka.api.AdminClientIntegrationTest > testForceClose STARTED

kafka.api.AdminClientIntegrationTest > testForceClose PASSED

kafka.api.AdminClientIntegrationTest > testListNodes STARTED

kafka.api.AdminClientIntegrationTest > testListNodes PASSED

kafka.api.AdminClientIntegrationTest > testDelayedClose STARTED

kafka.api.AdminClientIntegrationTest > testDelayedClose PASSED

kafka.api.AdminClientIntegrationTest > testCreateDeleteTopics STARTED

kafka.api.AdminClientIntegrationTest > testCreateDeleteTopics PASSED


Re: Speeding up integration tests

2019-02-28 Thread Viktor Somogyi-Vass
Hey All,

Thanks for the loads of ideas.

@Stanislav, @Sonke
I probably left it out of my email, but I really imagined this as a
case-by-case change. If we think it wouldn't cause problems for a given
test class, then it can be applied there. That way we'd limit the blast
radius somewhat. The one-hour gain is really just the most optimistic
scenario; I'm almost sure that not every test could be transformed to use
a common cluster.
We have had an internal improvement for half a year now which reruns the
failed test classes at the end of the Gradle test task and lets you know
that they were rerun and are probably flaky. It fails the build only if
the second run of the test class was also unsuccessful. I think it works
pretty well; we mostly have green builds. If there is interest, I can try
to contribute it.
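The rerun mechanism can be sketched roughly as follows. This is illustrative plain Java with invented names; the real implementation hooks into the Gradle test task rather than wrapping suppliers:

```java
import java.util.function.BooleanSupplier;

// Toy model of the rerun-on-failure idea: a failed test class gets one more
// attempt at the end of the build, and only a second failure fails the build.
public class FlakyRetry {

    static String runWithRetry(String name, BooleanSupplier test) {
        if (test.getAsBoolean()) return name + ": PASSED";
        if (test.getAsBoolean()) return name + ": PASSED (probably flaky)";
        return name + ": FAILED";
    }

    public static void main(String[] args) {
        int[] attempts = {0};
        // Simulated flaky test: fails on the first attempt, passes on the rerun.
        BooleanSupplier flaky = () -> attempts[0]++ > 0;
        System.out.println(runWithRetry("SomeIntegrationTest", flaky));
        // prints: SomeIntegrationTest: PASSED (probably flaky)
    }
}
```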

>I am also extremely annoyed at times by the amount of coffee I have to
drink before tests finish
Just please don't get a heart attack :)

@Ron, @Colin
You bring up a very good point that it is easier, and frees up more
resources, if we just run change-specific tests, and it's good to know
that a similar solution (meaning using a shared resource for testing) has
failed elsewhere. I second Ron on the test categorization, though as a
first attempt I think using a flaky retry plus running only the necessary
tests would help with both time saving and effectiveness. It would also be
easier to achieve.

@Ismael
Yeah, it'd be interesting to profile the startup/shutdown; I've never done
that. Perhaps I'll set some time apart for it :). It's definitely true,
though, that if we see a significant delay there, fixing it wouldn't just
improve the efficiency of the tests but also the customer experience.

Best,
Viktor
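The per-class cluster idea in the quoted mail below boils down to amortizing one expensive startup over all test methods in a class. A toy model of that trade-off (plain Java with invented names, not the actual KafkaServerTestHarness):

```java
// Toy model of fixture cost: count how often the expensive "cluster startup"
// runs for a class with `tests` test methods under each strategy. Names are
// invented; real code would use JUnit lifecycle hooks on a test harness.
public class FixtureCost {

    static int simulate(int tests, boolean sharedCluster) {
        int startups = 0;
        if (sharedCluster) startups++;        // one @BeforeClass-style setup
        for (int i = 0; i < tests; i++) {
            if (!sharedCluster) startups++;   // @Before-style setup per test
            // test body runs here, acting on its own topics for independence
        }
        return startups;
    }

    public static void main(String[] args) {
        int tests = 38; // e.g. the TopicCommand test class mentioned below
        System.out.println("per-test cluster startups:  " + simulate(tests, false)); // 38
        System.out.println("per-class cluster startups: " + simulate(tests, true));  // 1
    }
}
```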



On Thu, Feb 28, 2019 at 8:12 AM Ismael Juma  wrote:

> It's an idea that has come up before and worth exploring eventually.
> However, I'd first try to optimize the server startup/shutdown process. If
> we measure where the time is going, maybe some opportunities will present
> themselves.
>
> Ismael
>
> On Wed, Feb 27, 2019, 3:09 AM Viktor Somogyi-Vass  >
> wrote:
>
> > Hi Folks,
> >
> > I've been observing lately that unit tests usually take 2.5 hours to run
> > and a very big portion of these are the core tests where a new cluster is
> > spun up for every test. This takes most of the time. I ran a test
> > (TopicCommandWithAdminClient, with 38 tests inside) through the profiler
> > and
> > it shows for instance that running the whole class itself took 10 minutes
> > and 37 seconds where the useful time was 5 minutes 18 seconds. That's a
> > 100% overhead. Without profiler the whole class takes 7 minutes and 48
> > seconds, so the useful time would be between 3-4 minutes. This is a
> bigger
> > test though, most of them won't take this much.
> > There are 74 classes that implement KafkaServerTestHarness and just
> running
> > :core:integrationTest takes almost 2 hours.
> >
> > I think we could greatly speed up these integration tests by just
> creating
> > the cluster once per class and perform the tests on separate methods. I
> > know that this contradicts a bit the principle that tests should
> > be independent, but it seems like recreating clusters for each one is a
> > very
> > expensive operation. Also if the tests are acting on different resources
> > (different topics, etc.) then it might not hurt their independence. There
> > might be cases of course where this is not possible but I think there
> could
> > be a lot where it is.
> >
> > In the optimal case we could cut the testing time back by approximately
> an
> > hour. This would save resources and give quicker feedback for PR builds.
> >
> > What are your thoughts?
> > Has anyone thought about this or were there any attempts made?
> >
> > Best,
> > Viktor
> >
>


[jira] [Created] (KAFKA-8019) Better Scaling Experience for KStream

2019-02-28 Thread Boyang Chen (JIRA)
Boyang Chen created KAFKA-8019:
--

 Summary: Better Scaling Experience for KStream
 Key: KAFKA-8019
 URL: https://issues.apache.org/jira/browse/KAFKA-8019
 Project: Kafka
  Issue Type: New Feature
Reporter: Boyang Chen
Assignee: Boyang Chen


In our day-to-day work, we found it really hard to scale up a stateful stream 
application when its state store is very heavy. The caveat is that when newly 
spun-up hosts take ownership of some active tasks, they need a non-trivial 
amount of time to restore the state store from the changelog topic. The 
reassigned tasks would be unavailable for an unpredictably long time, which is 
not favorable. Secondly, the current global rebalance stops the entire 
application, which in a rolling host-swap scenario would mean endless resource 
shuffling without actual progress.

Following the community's [cooperative 
rebalancing|https://cwiki.apache.org/confluence/display/KAFKA/Incremental+Cooperative+Rebalancing%3A+Support+and+Policies]
 proposal, we need to build something similar for KStream to better handle the 
auto scaling experience.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-8018) Flaky Test SaslSslAdminClientIntegrationTest#testLegacyAclOpsNeverAffectOrReturnPrefixed

2019-02-28 Thread Matthias J. Sax (JIRA)
Matthias J. Sax created KAFKA-8018:
--

 Summary: Flaky Test 
SaslSslAdminClientIntegrationTest#testLegacyAclOpsNeverAffectOrReturnPrefixed
 Key: KAFKA-8018
 URL: https://issues.apache.org/jira/browse/KAFKA-8018
 Project: Kafka
  Issue Type: Bug
  Components: core, unit tests
Affects Versions: 2.2.0
Reporter: Matthias J. Sax


[https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/35/]
{quote}org.apache.zookeeper.KeeperException$SessionExpiredException: 
KeeperErrorCode = Session expired for 
/brokers/topics/__consumer_offsets/partitions/3/state at 
org.apache.zookeeper.KeeperException.create(KeeperException.java:130) at 
org.apache.zookeeper.KeeperException.create(KeeperException.java:54) at 
kafka.zookeeper.AsyncResponse.resultException(ZooKeeperClient.scala:534) at 
kafka.zk.KafkaZkClient.getTopicPartitionState(KafkaZkClient.scala:891) at 
kafka.zk.KafkaZkClient.getLeaderForPartition(KafkaZkClient.scala:901) at 
kafka.utils.TestUtils$.waitUntilLeaderIsElectedOrChanged(TestUtils.scala:669) 
at kafka.utils.TestUtils$.$anonfun$createTopic$1(TestUtils.scala:304) at 
kafka.utils.TestUtils$.$anonfun$createTopic$1$adapted(TestUtils.scala:302) at 
scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:237) at 
scala.collection.immutable.Range.foreach(Range.scala:158) at 
scala.collection.TraversableLike.map(TraversableLike.scala:237) at 
scala.collection.TraversableLike.map$(TraversableLike.scala:230) at 
scala.collection.AbstractTraversable.map(Traversable.scala:108) at 
kafka.utils.TestUtils$.createTopic(TestUtils.scala:302) at 
kafka.utils.TestUtils$.createOffsetsTopic(TestUtils.scala:350) at 
kafka.api.IntegrationTestHarness.doSetup(IntegrationTestHarness.scala:95) at 
kafka.api.IntegrationTestHarness.setUp(IntegrationTestHarness.scala:73) at 
kafka.api.AdminClientIntegrationTest.setUp(AdminClientIntegrationTest.scala:78) 
at 
kafka.api.SaslSslAdminClientIntegrationTest.setUp(SaslSslAdminClientIntegrationTest.scala:64){quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-8016) Race condition resulting in IllegalStateException inside Consumer Heartbeat thread when consumer joins group

2019-02-28 Thread Stanislav Kozlovski (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stanislav Kozlovski resolved KAFKA-8016.

Resolution: Duplicate

This is a symptom of https://issues.apache.org/jira/browse/KAFKA-7831

> Race condition resulting in IllegalStateException inside Consumer Heartbeat 
> thread when consumer joins group
> 
>
> Key: KAFKA-8016
> URL: https://issues.apache.org/jira/browse/KAFKA-8016
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Stanislav Kozlovski
>Assignee: Stanislav Kozlovski
>Priority: Major
>
> I think the consumer heartbeat thread may have a race condition that can 
> crash it.
> I have seen the following client exception after a consumer group rebalance:
> {code:java}
> INFO  Fetcher  Resetting offset for partition _ to offset 32110985.
> INFO  Fetcher  Resetting offset for partition _ to offset 32108462.
> java.lang.IllegalStateException: No current assignment for partition X
> at 
> org.apache.kafka.clients.consumer.internals.SubscriptionState.assignedState(SubscriptionState.java:259)
> at 
> org.apache.kafka.clients.consumer.internals.SubscriptionState.seek(SubscriptionState.java:264)
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher.resetOffsetIfNeeded(Fetcher.java:562)
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher.access$2100(Fetcher.java:93)
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher$2.onSuccess(Fetcher.java:589)
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher$2.onSuccess(Fetcher.java:577)
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:167)
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:127)
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher.handleListOffsetResponse(Fetcher.java:784)
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher.access$2300(Fetcher.java:93)
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher$4.onSuccess(Fetcher.java:704)
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher$4.onSuccess(Fetcher.java:699)
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture$1.onSuccess(RequestFuture.java:204)
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:167)
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:127)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.fireCompletion(ConsumerNetworkClient.java:563)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.firePendingCompletedRequests(ConsumerNetworkClient.java:390)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:293)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:300)
> at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:948)
> {code}
> The logs also had this message in a close timeframe:
> {code:java}
> INFO ConsumerCoordinator Revoking previously assigned partitions [X, 
> ...]{code}
>  
> After investigating, I see that there might be a race condition:
>  
> [Updating the fetch 
> positions|https://github.com/apache/kafka/blob/dfae20eceed67924c7122698a2c2f8b50a339133/clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java#L2213]
>  in the client [involves sending a `ListOffsetsRequest` to the 
> broker|https://github.com/apache/kafka/blob/dfae20eceed67924c7122698a2c2f8b50a339133/clients/src/main/java/org/apache/kafka/clients/consumer/internals/Fetcher.java#L603].
>  It is possible for the heartbeat thread to run the response-handling code in 
> its run loop 
> ([1|https://github.com/apache/kafka/blob/dfae20eceed67924c7122698a2c2f8b50a339133/clients/src/main/java/org/apache/kafka/clients/consumer/internals/AbstractCoordinator.java#L1036]->[2|https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/clients/consumer/internals/ConsumerNetworkClient.java#L247]).
>  
> updateFetchPositions() is called from the public methods 
> `Consumer#position()` and `Consumer#poll()`.
> The problem is that 
> [onJoinPrepare|https://github.com/apache/kafka/blob/dfae20eceed67924c7122698a2c2f8b50a339133/clients/src/main/java/org/apache/kafka/clients/consumer/internals/ConsumerCoordinator.java#L479]
>  may mutate the `subscriptions` variable while the heartbeat thread is 
> handling the offset response. This results in `subscriptions.seek()` 
> throwing an IllegalStateException.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

[jira] [Created] (KAFKA-8017) Narrow the scope of Streams' broker-upgrade-test

2019-02-28 Thread Guozhang Wang (JIRA)
Guozhang Wang created KAFKA-8017:


 Summary: Narrow the scope of Streams' broker-upgrade-test
 Key: KAFKA-8017
 URL: https://issues.apache.org/jira/browse/KAFKA-8017
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Reporter: Guozhang Wang


We have a streams-broker-upgrade test in which we keep the Streams client at 
the dev version and upgrade/downgrade brokers between arbitrary versions. This 
has several issues:

1) Not all upgrade/downgrade paths are supported, due to message format changes.
2) Even for the supported paths, we should consider the impact of 
inter.broker.protocol.version and message.format.version. More specifically: 
when upgrading to new-version byte code, we should stick with the old 
protocol/format versions; when downgrading to old-version byte code, we should 
start with the old protocol/format versions.

A good reference is the broker's own upgrade test, which lists all the 
supported paths so far:

{code}
@parametrize(from_kafka_version=str(LATEST_1_1), to_message_format_version=None, compression_types=["none"])
@parametrize(from_kafka_version=str(LATEST_1_1), to_message_format_version=None, compression_types=["lz4"])
@parametrize(from_kafka_version=str(LATEST_1_0), to_message_format_version=None, compression_types=["none"])
@parametrize(from_kafka_version=str(LATEST_1_0), to_message_format_version=None, compression_types=["snappy"])
@parametrize(from_kafka_version=str(LATEST_0_11_0), to_message_format_version=None, compression_types=["gzip"])
@parametrize(from_kafka_version=str(LATEST_0_11_0), to_message_format_version=None, compression_types=["lz4"])
@parametrize(from_kafka_version=str(LATEST_0_10_2), to_message_format_version=str(LATEST_0_9), compression_types=["none"])
@parametrize(from_kafka_version=str(LATEST_0_10_2), to_message_format_version=str(LATEST_0_10), compression_types=["snappy"])
@parametrize(from_kafka_version=str(LATEST_0_10_2), to_message_format_version=None, compression_types=["none"])
@parametrize(from_kafka_version=str(LATEST_0_10_2), to_message_format_version=None, compression_types=["lz4"])
@parametrize(from_kafka_version=str(LATEST_0_10_1), to_message_format_version=None, compression_types=["lz4"])
@parametrize(from_kafka_version=str(LATEST_0_10_1), to_message_format_version=None, compression_types=["snappy"])
@parametrize(from_kafka_version=str(LATEST_0_10_0), to_message_format_version=None, compression_types=["snappy"])
@parametrize(from_kafka_version=str(LATEST_0_10_0), to_message_format_version=None, compression_types=["lz4"])
@cluster(num_nodes=7)
@parametrize(from_kafka_version=str(LATEST_0_9), to_message_format_version=None, compression_types=["none"], security_protocol="SASL_SSL")
@cluster(num_nodes=6)
@parametrize(from_kafka_version=str(LATEST_0_9), to_message_format_version=None, compression_types=["snappy"])
@parametrize(from_kafka_version=str(LATEST_0_9), to_message_format_version=None, compression_types=["lz4"])
@parametrize(from_kafka_version=str(LATEST_0_9), to_message_format_version=str(LATEST_0_9), compression_types=["none"])
@parametrize(from_kafka_version=str(LATEST_0_9), to_message_format_version=str(LATEST_0_9), compression_types=["lz4"])
@cluster(num_nodes=7)
@parametrize(from_kafka_version=str(LATEST_0_8_2), to_message_format_version=None, compression_types=["none"])
@parametrize(from_kafka_version=str(LATEST_0_8_2), to_message_format_version=None, compression_types=["snappy"])
def test_upgrade(self, from_kafka_version, to_message_format_version, compression_types,
                 security_protocol="PLAINTEXT"):
{code}

And their upgrade code is:

{code}
def perform_upgrade(self, from_kafka_version, to_message_format_version=None):
    self.logger.info("First pass bounce - rolling upgrade")
    for node in self.kafka.nodes:
        self.kafka.stop_node(node)
        node.version = DEV_BRANCH
        node.config[config_property.INTER_BROKER_PROTOCOL_VERSION] = from_kafka_version
        node.config[config_property.MESSAGE_FORMAT_VERSION] = from_kafka_version
        self.kafka.start_node(node)

    self.logger.info("Second pass bounce - remove inter.broker.protocol.version config")
    for node in self.kafka.nodes:
        self.kafka.stop_node(node)
        del node.config[config_property.INTER_BROKER_PROTOCOL_VERSION]
        if to_message_format_version is None:
            del node.config[config_property.MESSAGE_FORMAT_VERSION]
        else:
            node.config[config_property.MESSAGE_FORMAT_VERSION] = to_message_format_version
        self.kafka.start_node(node)
{code}
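For reference, the two-pass config handling above can be sketched independently of ducktape. This is a minimal, self-contained model in which nodes are plain dicts and the constants are illustrative stand-ins (not taken from the real `config_property` module):

```python
# Hypothetical stand-ins for ducktape's config_property constants.
INTER_BROKER_PROTOCOL_VERSION = "inter.broker.protocol.version"
MESSAGE_FORMAT_VERSION = "log.message.format.version"
DEV_BRANCH = "dev"

def first_pass_bounce(nodes, from_kafka_version):
    """Roll every node to the new byte code while pinning the old
    inter-broker protocol and message format versions."""
    for node in nodes:
        node["version"] = DEV_BRANCH
        node["config"][INTER_BROKER_PROTOCOL_VERSION] = from_kafka_version
        node["config"][MESSAGE_FORMAT_VERSION] = from_kafka_version

def second_pass_bounce(nodes, to_message_format_version=None):
    """Lift the protocol pin; optionally keep an explicit message format."""
    for node in nodes:
        del node["config"][INTER_BROKER_PROTOCOL_VERSION]
        if to_message_format_version is None:
            node["config"].pop(MESSAGE_FORMAT_VERSION, None)
        else:
            node["config"][MESSAGE_FORMAT_VERSION] = to_message_format_version

nodes = [{"version": "1.1", "config": {}} for _ in range(3)]
first_pass_bounce(nodes, "1.1")   # brokers run dev code, still speak the 1.1 protocol
second_pass_bounce(nodes)         # pins removed; brokers now speak the new protocol
print(nodes[0])  # {'version': 'dev', 'config': {}}
```

A downgrade test would mirror this: per point 2) above, it should start the old byte code with the old protocol/format versions already pinned, rather than bouncing through the new ones.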



--


Build failed in Jenkins: kafka-trunk-jdk8 #3423

2019-02-28 Thread Apache Jenkins Server
See 


Changes:

[vahid.hashemian] KAFKA-7962: Avoid NPE for StickyAssignor (#6308)

--
[...truncated 3.94 MB...]

kafka.api.GroupAuthorizerIntegrationTest > testAuthorizationWithTopicExisting 
STARTED

kafka.api.GroupAuthorizerIntegrationTest > testAuthorizationWithTopicExisting 
PASSED

kafka.api.GroupAuthorizerIntegrationTest > 
testUnauthorizedDeleteRecordsWithoutDescribe STARTED

kafka.api.GroupAuthorizerIntegrationTest > 
testUnauthorizedDeleteRecordsWithoutDescribe PASSED

kafka.api.GroupAuthorizerIntegrationTest > testMetadataWithTopicDescribe STARTED

kafka.api.GroupAuthorizerIntegrationTest > testMetadataWithTopicDescribe PASSED

kafka.api.GroupAuthorizerIntegrationTest > testProduceWithTopicDescribe STARTED

kafka.api.GroupAuthorizerIntegrationTest > testProduceWithTopicDescribe PASSED

kafka.api.GroupAuthorizerIntegrationTest > testDescribeGroupApiWithNoGroupAcl 
STARTED

kafka.api.GroupAuthorizerIntegrationTest > testDescribeGroupApiWithNoGroupAcl 
PASSED

kafka.api.GroupAuthorizerIntegrationTest > 
testPatternSubscriptionMatchingInternalTopic STARTED

kafka.api.GroupAuthorizerIntegrationTest > 
testPatternSubscriptionMatchingInternalTopic PASSED

kafka.api.GroupAuthorizerIntegrationTest > 
testSendOffsetsWithNoConsumerGroupDescribeAccess STARTED

kafka.api.GroupAuthorizerIntegrationTest > 
testSendOffsetsWithNoConsumerGroupDescribeAccess PASSED

kafka.api.GroupAuthorizerIntegrationTest > testOffsetFetchTopicDescribe STARTED

kafka.api.GroupAuthorizerIntegrationTest > testOffsetFetchTopicDescribe PASSED

kafka.api.GroupAuthorizerIntegrationTest > testCommitWithTopicAndGroupRead 
STARTED

kafka.api.GroupAuthorizerIntegrationTest > testCommitWithTopicAndGroupRead 
PASSED

kafka.api.GroupAuthorizerIntegrationTest > 
testIdempotentProducerNoIdempotentWriteAclInInitProducerId STARTED

kafka.api.GroupAuthorizerIntegrationTest > 
testIdempotentProducerNoIdempotentWriteAclInInitProducerId PASSED

kafka.api.GroupAuthorizerIntegrationTest > 
testSimpleConsumeWithExplicitSeekAndNoGroupAccess STARTED

kafka.api.GroupAuthorizerIntegrationTest > 
testSimpleConsumeWithExplicitSeekAndNoGroupAccess PASSED

kafka.api.SaslPlaintextConsumerTest > testCoordinatorFailover STARTED

kafka.api.SaslPlaintextConsumerTest > testCoordinatorFailover PASSED

kafka.api.SaslPlaintextConsumerTest > testSimpleConsumption STARTED

kafka.api.SaslPlaintextConsumerTest > testSimpleConsumption PASSED

kafka.api.TransactionsTest > testBasicTransactions STARTED

kafka.api.TransactionsTest > testBasicTransactions PASSED

kafka.api.TransactionsTest > testFencingOnSendOffsets STARTED

kafka.api.TransactionsTest > testFencingOnSendOffsets PASSED

kafka.api.TransactionsTest > testFencingOnAddPartitions STARTED

kafka.api.TransactionsTest > testFencingOnAddPartitions PASSED

kafka.api.TransactionsTest > testFencingOnTransactionExpiration STARTED

kafka.api.TransactionsTest > testFencingOnTransactionExpiration PASSED

kafka.api.TransactionsTest > testDelayedFetchIncludesAbortedTransaction STARTED

kafka.api.TransactionsTest > testDelayedFetchIncludesAbortedTransaction PASSED

kafka.api.TransactionsTest > testOffsetMetadataInSendOffsetsToTransaction 
STARTED

kafka.api.TransactionsTest > testOffsetMetadataInSendOffsetsToTransaction PASSED

kafka.api.TransactionsTest > testConsecutivelyRunInitTransactions STARTED

kafka.api.TransactionsTest > testConsecutivelyRunInitTransactions PASSED

kafka.api.TransactionsTest > testReadCommittedConsumerShouldNotSeeUndecidedData 
STARTED

kafka.api.TransactionsTest > testReadCommittedConsumerShouldNotSeeUndecidedData 
PASSED

kafka.api.TransactionsTest > testFencingOnSend STARTED

kafka.api.TransactionsTest > testFencingOnSend PASSED

kafka.api.TransactionsTest > testFencingOnCommit STARTED

kafka.api.TransactionsTest > testFencingOnCommit PASSED

kafka.api.TransactionsTest > testMultipleMarkersOneLeader STARTED

kafka.api.TransactionsTest > testMultipleMarkersOneLeader PASSED

kafka.api.TransactionsTest > testCommitTransactionTimeout STARTED

kafka.api.TransactionsTest > testCommitTransactionTimeout PASSED

kafka.api.TransactionsTest > testSendOffsets STARTED

kafka.api.TransactionsTest > testSendOffsets PASSED

kafka.api.LogAppendTimeTest > testProduceConsume STARTED

kafka.api.LogAppendTimeTest > testProduceConsume PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testAcls STARTED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testAcls PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > 
testTwoConsumersWithDifferentSaslCredentials STARTED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > 
testTwoConsumersWithDifferentSaslCredentials PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaSubscribe STARTED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaSubscribe PASSED


[jira] [Created] (KAFKA-8016) Race condition resulting in IllegalStateException inside Consumer Heartbeat thread when consumer joins group

2019-02-28 Thread Stanislav Kozlovski (JIRA)
Stanislav Kozlovski created KAFKA-8016:
--

 Summary: Race condition resulting in IllegalStateException inside 
Consumer Heartbeat thread when consumer joins group
 Key: KAFKA-8016
 URL: https://issues.apache.org/jira/browse/KAFKA-8016
 Project: Kafka
  Issue Type: Improvement
Reporter: Stanislav Kozlovski
Assignee: Stanislav Kozlovski


I have seen the following client exception after a consumer group rebalance:
{code:java}
INFO  Fetcher  Resetting offset for partition _ to offset 32110985.
INFO  Fetcher  Resetting offset for partition _ to offset 32108462.

java.lang.IllegalStateException: No current assignment for partition X
at 
org.apache.kafka.clients.consumer.internals.SubscriptionState.assignedState(SubscriptionState.java:259)
at 
org.apache.kafka.clients.consumer.internals.SubscriptionState.seek(SubscriptionState.java:264)
at 
org.apache.kafka.clients.consumer.internals.Fetcher.resetOffsetIfNeeded(Fetcher.java:562)
at 
org.apache.kafka.clients.consumer.internals.Fetcher.access$2100(Fetcher.java:93)
at 
org.apache.kafka.clients.consumer.internals.Fetcher$2.onSuccess(Fetcher.java:589)
at 
org.apache.kafka.clients.consumer.internals.Fetcher$2.onSuccess(Fetcher.java:577)
at 
org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:167)
at 
org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:127)
at 
org.apache.kafka.clients.consumer.internals.Fetcher.handleListOffsetResponse(Fetcher.java:784)
at 
org.apache.kafka.clients.consumer.internals.Fetcher.access$2300(Fetcher.java:93)
at 
org.apache.kafka.clients.consumer.internals.Fetcher$4.onSuccess(Fetcher.java:704)
at 
org.apache.kafka.clients.consumer.internals.Fetcher$4.onSuccess(Fetcher.java:699)
at 
org.apache.kafka.clients.consumer.internals.RequestFuture$1.onSuccess(RequestFuture.java:204)
at 
org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:167)
at 
org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:127)
at 
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.fireCompletion(ConsumerNetworkClient.java:563)
at 
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.firePendingCompletedRequests(ConsumerNetworkClient.java:390)
at 
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:293)
at 
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:300)
at 
org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:948)
{code}
The logs also had this message in a close timeframe:
{code:java}
INFO ConsumerCoordinator Revoking previously assigned partitions [X, ...]{code}
 

After investigating, I see that there might be a race condition:

[Updating the fetch 
positions|https://github.com/apache/kafka/blob/dfae20eceed67924c7122698a2c2f8b50a339133/clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java#L2213]
 in the client [involves sending a `ListOffsetsRequest` to the 
broker|https://github.com/apache/kafka/blob/dfae20eceed67924c7122698a2c2f8b50a339133/clients/src/main/java/org/apache/kafka/clients/consumer/internals/Fetcher.java#L603].
 It is possible for the heartbeat thread to handle the response in its run 
loop: 
[1|https://github.com/apache/kafka/blob/dfae20eceed67924c7122698a2c2f8b50a339133/clients/src/main/java/org/apache/kafka/clients/consumer/internals/AbstractCoordinator.java#L1036]->[2|https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/clients/consumer/internals/ConsumerNetworkClient.java#L247].
 This happens when either `Consumer#position()` or `Consumer#poll()` gets 
called.
The problem is that 
[onJoinPrepare|https://github.com/apache/kafka/blob/dfae20eceed67924c7122698a2c2f8b50a339133/clients/src/main/java/org/apache/kafka/clients/consumer/internals/ConsumerCoordinator.java#L479]
 may mutate the `subscriptions` variable while the heartbeat thread is 
handling the offset response. This results in `subscriptions.seek()` 
throwing an IllegalStateException.
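The interleaving described above can be reduced to a minimal, deterministic sketch. This is plain Python with toy stand-ins (`ToySubscriptionState`, `on_list_offsets_response`); all names here are illustrative models of the client internals, not the actual classes:

```python
class IllegalStateError(RuntimeError):
    """Stand-in for Java's IllegalStateException."""

class ToySubscriptionState:
    """Toy model of the client's SubscriptionState; not the real class."""
    def __init__(self, partitions):
        self.assignment = {p: 0 for p in partitions}

    def seek(self, partition, offset):
        # Mirrors assignedState(): seeking a partition that is no
        # longer assigned is an illegal state.
        if partition not in self.assignment:
            raise IllegalStateError(
                "No current assignment for partition %s" % partition)
        self.assignment[partition] = offset

subs = ToySubscriptionState(["X-0"])

# Step 1: the polling thread sends a ListOffsets request and registers a
# completion handler that will seek() to the returned offset.
def on_list_offsets_response(offset):
    subs.seek("X-0", offset)

# Step 2: before the response is handled, a rebalance begins and
# onJoinPrepare revokes the assignment ("Revoking previously assigned
# partitions [X, ...]").
subs.assignment.clear()

# Step 3: the heartbeat thread fires the pending completion handler, which
# now seeks a partition with no current assignment.
try:
    on_list_offsets_response(32110985)
except IllegalStateError as e:
    print(e)  # No current assignment for partition X-0
```

The sketch serializes the two threads into explicit steps so the failing interleaving is reproducible; in the real client the revocation and the response handling race nondeterministically.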



--


[jira] [Resolved] (KAFKA-7940) Flaky Test CustomQuotaCallbackTest#testCustomQuotaCallback

2019-02-28 Thread Stanislav Kozlovski (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stanislav Kozlovski resolved KAFKA-7940.

Resolution: Fixed
  Reviewer: Rajini Sivaram

Fixed with https://github.com/apache/kafka/pull/6330#event-2169171736

> Flaky Test CustomQuotaCallbackTest#testCustomQuotaCallback
> --
>
> Key: KAFKA-7940
> URL: https://issues.apache.org/jira/browse/KAFKA-7940
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Assignee: Stanislav Kozlovski
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.2.1
>
>
> To get stable nightly builds for `2.2` release, I create tickets for all 
> observed test failures.
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/14/]
> {quote}java.lang.AssertionError: Too many quotaLimit calls Map(PRODUCE -> 1, 
> FETCH -> 1, REQUEST -> 4) at org.junit.Assert.fail(Assert.java:88) at 
> org.junit.Assert.assertTrue(Assert.java:41) at 
> kafka.api.CustomQuotaCallbackTest.testCustomQuotaCallback(CustomQuotaCallbackTest.scala:105){quote}



--


Jenkins build is back to normal : kafka-trunk-jdk11 #324

2019-02-28 Thread Apache Jenkins Server
See 




Re: [VOTE] 2.2.0 RC0 [CANCELED]

2019-02-28 Thread Matthias J. Sax
We discovered a blocker issue:
https://issues.apache.org/jira/browse/KAFKA-8012

Thus, I am cancelling the current vote. I will cut a new RC after the
blocker is fixed.

Nevertheless, please keep testing. If we find something else before the
new RC is cut, it would streamline the overall process.


Thanks a lot!


-Matthias



On 2/27/19 1:43 AM, Satish Duggana wrote:
> +1 (non-binding)
> 
> - Ran testAll/releaseTarGzAll successfully with NO failures.
> - Ran through quickstart of core/streams on builds generated from 2.2.0-rc0
> tag
> - Ran a few internal apps targeting topics on a 3-node cluster.
> 
> Thanks for running the release Matthias!
> 
> On Tue, Feb 26, 2019 at 8:17 PM Adam Bellemare 
> wrote:
> 
>> Downloaded, compiled and passed all tests successfully.
>>
>> Ran quickstart (https://kafka.apache.org/quickstart) up to step 6 without
>> issue.
>>
>> (+1 non-binding).
>>
>> Adam
>>
>>
>>
>> On Mon, Feb 25, 2019 at 9:19 PM Matthias J. Sax 
>> wrote:
>>
>>> @Stephane
>>>
>>> Thanks! You are right (I copied the list from an older draft without
>>> double checking).
>>>
>>> On the release Wiki page, it's correctly listed as postponed:
>>>
>> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=100827512
>>>
>>>
>>> @Viktor
>>>
>>> Thanks. This will not block the release, but I'll make sure to include
>>> it in the webpage update.
>>>
>>>
>>>
>>> -Matthias
>>>
>>> On 2/25/19 5:16 AM, Viktor Somogyi-Vass wrote:
 Hi Matthias,

 I've noticed a minor line break issue in the upgrade docs. I've created a
 small PR for that: https://github.com/apache/kafka/pull/6320

 Best,
 Viktor

 On Sun, Feb 24, 2019 at 10:16 PM Stephane Maarek <
>>> kafka.tutori...@gmail.com>
 wrote:

> Hi Matthias
>
> Thanks for this
> Running through the list of KIPs. I think this is not included in 2.2:
>
> - Allow clients to suppress auto-topic-creation
>
> Regards
> Stephane
>
> On Sun, Feb 24, 2019 at 1:03 AM Matthias J. Sax <
>> matth...@confluent.io>
> wrote:
>
>> Hello Kafka users, developers and client-developers,
>>
>> This is the first candidate for the release of Apache Kafka 2.2.0.
>>
>> This is a minor release with the following highlights:
>>
>>  - Added SSL support for custom principal name
>>  - Allow SASL connections to periodically re-authenticate
>>  - Improved consumer group management
>>- default group.id is `null` instead of empty string
>>  - Add --under-min-isr option to describe topics command
>>  - Allow clients to suppress auto-topic-creation
>>  - API improvement
>>- Producer: introduce close(Duration)
>>- AdminClient: introduce close(Duration)
>>- Kafka Streams: new flatTransform() operator in Streams DSL
>>- KafkaStreams (and other classes) now implement AutoCloseable to
>> support try-with-resources
>>- New Serdes and default method implementations
>>  - Kafka Streams exposed internal client.id via ThreadMetadata
>>  - Metric improvements:  All `-min`, `-avg` and `-max` metrics will
>> now
>> output `NaN` as default value
>>
>>
>> Release notes for the 2.2.0 release:
>> http://home.apache.org/~mjsax/kafka-2.2.0-rc0/RELEASE_NOTES.html
>>
>> *** Please download, test and vote by Friday, March 1, 9am PST.
>>
>> Kafka's KEYS file containing PGP keys we use to sign the release:
>> http://kafka.apache.org/KEYS
>>
>> * Release artifacts to be voted upon (source and binary):
>> http://home.apache.org/~mjsax/kafka-2.2.0-rc0/
>>
>> * Maven artifacts to be voted upon:
>>
>> https://repository.apache.org/content/groups/staging/org/apache/kafka/
>>
>> * Javadoc:
>> http://home.apache.org/~mjsax/kafka-2.2.0-rc0/javadoc/
>>
>> * Tag to be voted upon (off 2.2 branch) is the 2.2.0 tag:
>> https://github.com/apache/kafka/releases/tag/2.2.0-rc0
>>
>> * Documentation:
>> https://kafka.apache.org/22/documentation.html
>>
>> * Protocol:
>> https://kafka.apache.org/22/protocol.html
>>
>> * Successful Jenkins builds for the 2.2 branch:
>> Unit/integration tests:
>>> https://builds.apache.org/job/kafka-2.2-jdk8/31/
>>
>> * System tests:
>> https://jenkins.confluent.io/job/system-test-kafka/job/2.2/
>>
>>
>>
>>
>> Thanks,
>>
>> -Matthias
>>
>>
>

>>>
>>>
>>
> 





[jira] [Created] (KAFKA-8015) Flaky Test SaslGssapiSslEndToEndAuthorizationTest#testProduceConsumeTopicAutoCreateTopicCreateAcl

2019-02-28 Thread Matthias J. Sax (JIRA)
Matthias J. Sax created KAFKA-8015:
--

 Summary: Flaky Test 
SaslGssapiSslEndToEndAuthorizationTest#testProduceConsumeTopicAutoCreateTopicCreateAcl
 Key: KAFKA-8015
 URL: https://issues.apache.org/jira/browse/KAFKA-8015
 Project: Kafka
  Issue Type: Bug
  Components: core, unit tests
Affects Versions: 2.3.0
Reporter: Matthias J. Sax


[https://builds.apache.org/blue/organizations/jenkins/kafka-trunk-jdk8/detail/kafka-trunk-jdk8/3422/tests]
{quote}java.lang.AssertionError: Partition [e2etopic,0] metadata not propagated 
after 15000 ms
at kafka.utils.TestUtils$.fail(TestUtils.scala:356)
at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:766)
at kafka.utils.TestUtils$.waitUntilMetadataIsPropagated(TestUtils.scala:855)
at kafka.utils.TestUtils$$anonfun$createTopic$1.apply(TestUtils.scala:303)
at kafka.utils.TestUtils$$anonfun$createTopic$1.apply(TestUtils.scala:302)
at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.immutable.Range.foreach(Range.scala:160)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
at scala.collection.AbstractTraversable.map(Traversable.scala:104)
at kafka.utils.TestUtils$.createTopic(TestUtils.scala:302)
at 
kafka.integration.KafkaServerTestHarness.createTopic(KafkaServerTestHarness.scala:125)
at 
kafka.api.EndToEndAuthorizationTest.setUp(EndToEndAuthorizationTest.scala:189)
at 
kafka.api.SaslEndToEndAuthorizationTest.setUp(SaslEndToEndAuthorizationTest.scala:45){quote}



--