Build failed in Jenkins: kafka-0.10.1-jdk7 #21

2016-09-28 Thread Apache Jenkins Server
See 

--
Started by an SCM change
Started by an SCM change
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on jenkins-test-76e (Ubuntu ubuntu jenkins-cloud-8GB 
jenkins-cloud-4GB cloud-slave) in workspace 

java.io.IOException: Failed to mkdirs: 

at hudson.FilePath.mkdirs(FilePath.java:1191)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1267)
at hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:604)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:529)
at hudson.model.Run.execute(Run.java:1741)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:98)
at hudson.model.Executor.run(Executor.java:410)
Retrying after 10 seconds
java.io.IOException: Failed to mkdirs: 

at hudson.FilePath.mkdirs(FilePath.java:1191)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1267)
at hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:604)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:529)
at hudson.model.Run.execute(Run.java:1741)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:98)
at hudson.model.Executor.run(Executor.java:410)
Retrying after 10 seconds
java.io.IOException: Failed to mkdirs: 

at hudson.FilePath.mkdirs(FilePath.java:1191)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1267)
at hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:604)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:529)
at hudson.model.Run.execute(Run.java:1741)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:98)
at hudson.model.Executor.run(Executor.java:410)
Recording test results
ERROR: Build step failed with exception
 does not exist.
at org.apache.tools.ant.types.AbstractFileSet.getDirectoryScanner(AbstractFileSet.java:483)
at org.apache.tools.ant.types.AbstractFileSet.getDirectoryScanner(AbstractFileSet.java:460)
at hudson.tasks.junit.JUnitParser$ParseResultCallable.invoke(JUnitParser.java:127)
at hudson.tasks.junit.JUnitParser$ParseResultCallable.invoke(JUnitParser.java:107)
at hudson.FilePath$FileCallableWrapper.call(FilePath.java:2772)
at hudson.remoting.UserRequest.perform(UserRequest.java:153)
at hudson.remoting.UserRequest.perform(UserRequest.java:50)
at hudson.remoting.Request$2.run(Request.java:332)
at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:68)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
at ..remote call to jenkins-test-76e(Native Method)
at hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1416)
at hudson.remoting.UserResponse.retrieve(UserRequest.java:253)
at hudson.remoting.Channel.call(Channel.java:781)
at hudson.FilePath.act(FilePath.java:1007)
at hudson.FilePath.act(FilePath.java:996)
at hudson.tasks.junit.JUnitParser.parseResult(JUnitParser.java:103)
at hudson.tasks.junit.JUnitResultArchiver.parse(JUnitResultArchiver.java:128)
at hudson.tasks.junit.JUnitResultArchiver.perform(JUnitResultArchiver.java:149)
at hudson.tasks.BuildStepCompatibilityLayer.perform(BuildStepCompatibilityLayer.java:78)
at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
at hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:779)
at hudson.model.AbstractBuild$AbstractBuildExecution.performAllBuildSteps(AbstractBuild.java:720)
at hudson.model.Build$BuildExecution.post2(Build.java:185)
at 

Build failed in Jenkins: kafka-0.10.1-jdk7 #20

2016-09-28 Thread Apache Jenkins Server
See 

Changes:

[jason] KAFKA-4194; Follow-up improvements/testing for ListOffsets v1 (KIP-79)

--
[...truncated 11851 lines...]

org.apache.kafka.clients.NetworkClientTest > 
testSimpleRequestResponseWithStaticNodes PASSED

org.apache.kafka.clients.NetworkClientTest > testConnectionDelay STARTED

org.apache.kafka.clients.NetworkClientTest > testConnectionDelay PASSED

org.apache.kafka.clients.NetworkClientTest > testSendToUnreadyNode STARTED

org.apache.kafka.clients.NetworkClientTest > testSendToUnreadyNode PASSED

org.apache.kafka.clients.NetworkClientTest > testConnectionDelayDisconnected 
STARTED

org.apache.kafka.clients.NetworkClientTest > testConnectionDelayDisconnected 
PASSED

org.apache.kafka.clients.MetadataTest > testListenerCanUnregister STARTED

org.apache.kafka.clients.MetadataTest > testListenerCanUnregister PASSED

org.apache.kafka.clients.MetadataTest > testTopicExpiry STARTED

org.apache.kafka.clients.MetadataTest > testTopicExpiry PASSED

org.apache.kafka.clients.MetadataTest > testFailedUpdate STARTED

org.apache.kafka.clients.MetadataTest > testFailedUpdate PASSED

org.apache.kafka.clients.MetadataTest > testMetadataUpdateWaitTime STARTED

org.apache.kafka.clients.MetadataTest > testMetadataUpdateWaitTime PASSED

org.apache.kafka.clients.MetadataTest > testUpdateWithNeedMetadataForAllTopics 
STARTED

org.apache.kafka.clients.MetadataTest > testUpdateWithNeedMetadataForAllTopics 
PASSED

org.apache.kafka.clients.MetadataTest > testClusterListenerGetsNotifiedOfUpdate 
STARTED

org.apache.kafka.clients.MetadataTest > testClusterListenerGetsNotifiedOfUpdate 
PASSED

org.apache.kafka.clients.MetadataTest > testMetadata STARTED

org.apache.kafka.clients.MetadataTest > testMetadata PASSED

org.apache.kafka.clients.MetadataTest > testListenerGetsNotifiedOfUpdate STARTED

org.apache.kafka.clients.MetadataTest > testListenerGetsNotifiedOfUpdate PASSED

org.apache.kafka.clients.MetadataTest > testNonExpiringMetadata STARTED

org.apache.kafka.clients.MetadataTest > testNonExpiringMetadata PASSED
:clients:determineCommitId UP-TO-DATE
:clients:createVersionFile
:clients:jar UP-TO-DATE
:core:compileJava UP-TO-DATE
:core:compileScala
:79: value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see corresponding Javadoc for more information.
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP
 ^
:36: value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see corresponding Javadoc for more information.
 commitTimestamp: Long = org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP,
  ^
:37: value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see corresponding Javadoc for more information.
 expireTimestamp: Long = org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP) {
  ^
:523: value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see corresponding Javadoc for more information.
  if (offsetAndMetadata.expireTimestamp == org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP)
 ^
:310: value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see corresponding Javadoc for more information.
  if (partitionData.timestamp == OffsetCommitRequest.DEFAULT_TIMESTAMP)
 ^
:93: class ProducerConfig in package producer is deprecated: This class has been deprecated and will be removed in a future release. Please use org.apache.kafka.clients.producer.ProducerConfig instead.
val producerConfig = new ProducerConfig(props)
 ^
:94: method fetchTopicMetadata in 

Build failed in Jenkins: kafka-trunk-jdk7 #1581

2016-09-28 Thread Apache Jenkins Server
See 

--
[...truncated 22 lines...]
remote: Compressing objects:   5% (790/15800)
[...truncated 83 progress lines, ending at 87% (13746/15800)...]

Build failed in Jenkins: kafka-trunk-jdk8 #924

2016-09-28 Thread Apache Jenkins Server
See 

Changes:

[jason] KAFKA-4194; Follow-up improvements/testing for ListOffsets v1 (KIP-79)

[jason] MINOR: Set JVM parameters for the Gradle Test executor processes

--
[...truncated 13787 lines...]
org.apache.kafka.streams.kstream.internals.KeyValuePrinterProcessorTest > 
testPrintKeyValueDefaultSerde PASSED

org.apache.kafka.streams.kstream.internals.KTableSourceTest > 
testSendingOldValue STARTED

org.apache.kafka.streams.kstream.internals.KTableSourceTest > 
testSendingOldValue PASSED

org.apache.kafka.streams.kstream.internals.KTableSourceTest > 
testNotSendingOldValue STARTED

org.apache.kafka.streams.kstream.internals.KTableSourceTest > 
testNotSendingOldValue PASSED

org.apache.kafka.streams.kstream.internals.KTableSourceTest > testKTable STARTED

org.apache.kafka.streams.kstream.internals.KTableSourceTest > testKTable PASSED

org.apache.kafka.streams.kstream.internals.KTableSourceTest > testValueGetter 
STARTED

org.apache.kafka.streams.kstream.internals.KTableSourceTest > testValueGetter 
PASSED

org.apache.kafka.streams.kstream.internals.KTableMapKeysTest > 
testMapKeysConvertingToStream STARTED

org.apache.kafka.streams.kstream.internals.KTableMapKeysTest > 
testMapKeysConvertingToStream PASSED

org.apache.kafka.streams.kstream.internals.KStreamForeachTest > testForeach 
STARTED

org.apache.kafka.streams.kstream.internals.KStreamForeachTest > testForeach 
PASSED

org.apache.kafka.streams.kstream.internals.KTableKTableOuterJoinTest > 
testSendingOldValue STARTED

org.apache.kafka.streams.kstream.internals.KTableKTableOuterJoinTest > 
testSendingOldValue PASSED

org.apache.kafka.streams.kstream.internals.KTableKTableOuterJoinTest > testJoin 
STARTED

org.apache.kafka.streams.kstream.internals.KTableKTableOuterJoinTest > testJoin 
PASSED

org.apache.kafka.streams.kstream.internals.KTableKTableOuterJoinTest > 
testNotSendingOldValue STARTED

org.apache.kafka.streams.kstream.internals.KTableKTableOuterJoinTest > 
testNotSendingOldValue PASSED

org.apache.kafka.streams.kstream.internals.KStreamKStreamJoinTest > 
testOuterJoin STARTED

org.apache.kafka.streams.kstream.internals.KStreamKStreamJoinTest > 
testOuterJoin PASSED

org.apache.kafka.streams.kstream.internals.KStreamKStreamJoinTest > testJoin 
STARTED

org.apache.kafka.streams.kstream.internals.KStreamKStreamJoinTest > testJoin 
PASSED

org.apache.kafka.streams.kstream.internals.KStreamKStreamJoinTest > 
testWindowing STARTED

org.apache.kafka.streams.kstream.internals.KStreamKStreamJoinTest > 
testWindowing PASSED

org.apache.kafka.streams.kstream.internals.KStreamKStreamJoinTest > 
testAsymetricWindowingBefore STARTED

org.apache.kafka.streams.kstream.internals.KStreamKStreamJoinTest > 
testAsymetricWindowingBefore PASSED

org.apache.kafka.streams.kstream.internals.KStreamKStreamJoinTest > 
testAsymetricWindowingAfter STARTED

org.apache.kafka.streams.kstream.internals.KStreamKStreamJoinTest > 
testAsymetricWindowingAfter PASSED

org.apache.kafka.streams.kstream.internals.KStreamFlatMapValuesTest > 
testFlatMapValues STARTED

org.apache.kafka.streams.kstream.internals.KStreamFlatMapValuesTest > 
testFlatMapValues PASSED

org.apache.kafka.streams.kstream.internals.KTableKTableJoinTest > testJoin 
STARTED

org.apache.kafka.streams.kstream.internals.KTableKTableJoinTest > testJoin 
PASSED

org.apache.kafka.streams.kstream.internals.KTableKTableJoinTest > 
testNotSendingOldValues STARTED

org.apache.kafka.streams.kstream.internals.KTableKTableJoinTest > 
testNotSendingOldValues PASSED

org.apache.kafka.streams.kstream.internals.KTableKTableJoinTest > 
testSendingOldValues STARTED

org.apache.kafka.streams.kstream.internals.KTableKTableJoinTest > 
testSendingOldValues PASSED

org.apache.kafka.streams.kstream.internals.KTableAggregateTest > testAggBasic 
STARTED

org.apache.kafka.streams.kstream.internals.KTableAggregateTest > testAggBasic 
PASSED

org.apache.kafka.streams.kstream.internals.KTableAggregateTest > testCount 
STARTED

org.apache.kafka.streams.kstream.internals.KTableAggregateTest > testCount 
PASSED

org.apache.kafka.streams.kstream.internals.KTableAggregateTest > 
testAggCoalesced STARTED

org.apache.kafka.streams.kstream.internals.KTableAggregateTest > 
testAggCoalesced PASSED

org.apache.kafka.streams.kstream.internals.KTableAggregateTest > 
testAggRepartition STARTED

org.apache.kafka.streams.kstream.internals.KTableAggregateTest > 
testAggRepartition PASSED

org.apache.kafka.streams.kstream.internals.KTableAggregateTest > 
testRemoveOldBeforeAddNew STARTED

org.apache.kafka.streams.kstream.internals.KTableAggregateTest > 
testRemoveOldBeforeAddNew PASSED

org.apache.kafka.streams.kstream.internals.KTableAggregateTest > 
testCountCoalesced STARTED

org.apache.kafka.streams.kstream.internals.KTableAggregateTest > 
testCountCoalesced PASSED

org.apache.kafka.streams.kstream.internals.KStreamFilterTest > testFilterNot 
STARTED


Build failed in Jenkins: kafka-trunk-jdk7 #1580

2016-09-28 Thread Apache Jenkins Server
See 

Changes:

[jay.kreps] MINOR: Improve introduction section in docs to better cover connect 
and

[ismael] KAFKA-4209; Reduce run time for quota integration tests

--
[...truncated 2258 lines...]
kafka.api.AuthorizerIntegrationTest > testProduceWithNoTopicAccess STARTED

kafka.api.AuthorizerIntegrationTest > testProduceWithNoTopicAccess PASSED

kafka.api.AuthorizerIntegrationTest > testProduceWithTopicWrite STARTED

kafka.api.AuthorizerIntegrationTest > testProduceWithTopicWrite PASSED

kafka.api.AuthorizerIntegrationTest > testAuthorization STARTED

kafka.api.AuthorizerIntegrationTest > testAuthorization PASSED

kafka.api.AuthorizerIntegrationTest > testUnauthorizedDeleteWithDescribe STARTED

kafka.api.AuthorizerIntegrationTest > testUnauthorizedDeleteWithDescribe PASSED

kafka.api.AuthorizerIntegrationTest > testConsumeWithTopicAndGroupRead STARTED

kafka.api.AuthorizerIntegrationTest > testConsumeWithTopicAndGroupRead PASSED

kafka.api.AuthorizerIntegrationTest > testUnauthorizedDeleteWithoutDescribe 
STARTED

kafka.api.AuthorizerIntegrationTest > testUnauthorizedDeleteWithoutDescribe 
PASSED

kafka.api.AuthorizerIntegrationTest > testConsumeWithTopicWrite STARTED

kafka.api.AuthorizerIntegrationTest > testConsumeWithTopicWrite PASSED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchWithNoGroupAccess STARTED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchWithNoGroupAccess PASSED

kafka.api.AuthorizerIntegrationTest > testCommitWithNoAccess STARTED

kafka.api.AuthorizerIntegrationTest > testCommitWithNoAccess PASSED

kafka.api.AuthorizerIntegrationTest > testCommitWithNoGroupAccess STARTED

kafka.api.AuthorizerIntegrationTest > testCommitWithNoGroupAccess PASSED

kafka.api.AuthorizerIntegrationTest > testConsumeWithNoAccess STARTED

kafka.api.AuthorizerIntegrationTest > testConsumeWithNoAccess PASSED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchWithTopicAndGroupRead 
STARTED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchWithTopicAndGroupRead 
PASSED

kafka.api.AuthorizerIntegrationTest > testCommitWithTopicDescribe STARTED

kafka.api.AuthorizerIntegrationTest > testCommitWithTopicDescribe PASSED

kafka.api.AuthorizerIntegrationTest > testProduceWithTopicDescribe STARTED

kafka.api.AuthorizerIntegrationTest > testProduceWithTopicDescribe PASSED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchTopicDescribe STARTED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchTopicDescribe PASSED

kafka.api.AuthorizerIntegrationTest > testCommitWithTopicAndGroupRead STARTED

kafka.api.AuthorizerIntegrationTest > testCommitWithTopicAndGroupRead PASSED

kafka.api.AuthorizerIntegrationTest > 
testSimpleConsumeWithExplicitSeekAndNoGroupAccess STARTED

kafka.api.AuthorizerIntegrationTest > 
testSimpleConsumeWithExplicitSeekAndNoGroupAccess PASSED

kafka.api.AdminClientTest > testDescribeGroup STARTED

kafka.api.AdminClientTest > testDescribeGroup PASSED

kafka.api.AdminClientTest > testDescribeConsumerGroup STARTED

kafka.api.AdminClientTest > testDescribeConsumerGroup PASSED

kafka.api.AdminClientTest > testListGroups STARTED

kafka.api.AdminClientTest > testListGroups PASSED

kafka.api.AdminClientTest > testDescribeConsumerGroupForNonExistentGroup STARTED

kafka.api.AdminClientTest > testDescribeConsumerGroupForNonExistentGroup PASSED

kafka.api.ProducerBounceTest > testBrokerFailure STARTED

kafka.api.ProducerBounceTest > testBrokerFailure PASSED

kafka.api.ClientIdQuotaTest > testProducerConsumerOverrideUnthrottled STARTED

kafka.api.ClientIdQuotaTest > testProducerConsumerOverrideUnthrottled PASSED

kafka.api.ClientIdQuotaTest > testThrottledProducerConsumer STARTED

kafka.api.ClientIdQuotaTest > testThrottledProducerConsumer PASSED

kafka.api.ClientIdQuotaTest > testQuotaOverrideDelete STARTED

kafka.api.ClientIdQuotaTest > testQuotaOverrideDelete PASSED

kafka.api.test.ProducerCompressionTest > testCompression[0] STARTED

kafka.api.test.ProducerCompressionTest > testCompression[0] PASSED

kafka.api.test.ProducerCompressionTest > testCompression[1] STARTED

kafka.api.test.ProducerCompressionTest > testCompression[1] PASSED

kafka.api.test.ProducerCompressionTest > testCompression[2] STARTED

kafka.api.test.ProducerCompressionTest > testCompression[2] PASSED

kafka.api.test.ProducerCompressionTest > testCompression[3] STARTED

kafka.api.test.ProducerCompressionTest > testCompression[3] PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testNoConsumeAcl STARTED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testNoConsumeAcl PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testProduceConsumeViaAssign 
STARTED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testProduceConsumeViaAssign 
PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testNoProduceAcl STARTED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testNoProduceAcl PASSED


[jira] [Updated] (KAFKA-4217) KStream.transform equivalent of flatMap

2016-09-28 Thread Matthias J. Sax (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-4217:
---
Description: 
{{KStream.transform}} gives you access to state stores while allowing you to 
return zero or one transformed {{KeyValue}}.  Alas, it is unclear what method 
you should use if you want to access state stores and return zero or multiple 
{{KeyValue}}.  Presumably you can use {{transform}}, always return {{null}}, 
and use {{ProcessorContext.forward}} to emit {{KeyValues}}.

It may be good to introduce a {{transform}}-like {{flatMap}} equivalent, or 
allow store access from other {{KStream}} methods, such as {{flatMap}} itself.

  was:
{{KStream.transform}} gives you access to state stores while allowing you to 
return zero or one transformed {{KeyValue}}s.  Alas, it is unclear what method 
you should use if you want to access state stores and return zero or multiple 
{{KeyValue}}s.  Presumably you can use {{transform}}, always return {{null}}, 
and use {{ProcessorContext.forward}} to emit {{KeyValues}}s.

It may be good to introduce a {{transform}}-like {{flatMap}} equivalent, or 
allow store access from other {{KStream}} methods, such as {{flatMap}} itself.


> KStream.transform equivalent of flatMap
> ---
>
> Key: KAFKA-4217
> URL: https://issues.apache.org/jira/browse/KAFKA-4217
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 0.10.0.1
>Reporter: Elias Levy
>Assignee: Guozhang Wang
>
> {{KStream.transform}} gives you access to state stores while allowing you to 
> return zero or one transformed {{KeyValue}}.  Alas, it is unclear what method 
> you should use if you want to access state stores and return zero or multiple 
> {{KeyValue}}.  Presumably you can use {{transform}}, always return {{null}}, 
> and use {{ProcessorContext.forward}} to emit {{KeyValues}}.
> It may be good to introduce a {{transform}}-like {{flatMap}} equivalent, or 
> allow store access from other {{KStream}} methods, such as {{flatMap}} itself.
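
A minimal Java sketch of the workaround the description mentions (class, topic,
and type names are invented; API shapes are from Kafka Streams 0.10.x):

{noformat}
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.Transformer;
import org.apache.kafka.streams.processor.ProcessorContext;

// Emit zero or more records via context.forward() and always return null,
// so transform() behaves like a state-store-aware flatMap.
class SplitTransformer implements Transformer<String, String, KeyValue<String, String>> {
    private ProcessorContext context;

    @Override
    public void init(ProcessorContext context) { this.context = context; }

    @Override
    public KeyValue<String, String> transform(String key, String value) {
        for (String part : value.split(","))
            context.forward(key, part);  // zero or more outputs
        return null;                     // nothing emitted via the return value
    }

    @Override
    public KeyValue<String, String> punctuate(long timestamp) { return null; }

    @Override
    public void close() {}
}
{noformat}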



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4114) Allow for different "auto.offset.reset" strategies for different input streams

2016-09-28 Thread Matthias J. Sax (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15531701#comment-15531701
 ] 

Matthias J. Sax commented on KAFKA-4114:


No problem. Just shoot your questions. :)

> Allow for different "auto.offset.reset" strategies for different input streams
> --
>
> Key: KAFKA-4114
> URL: https://issues.apache.org/jira/browse/KAFKA-4114
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: Bill Bejeck
>
> Today we only have one consumer config "auto.offset.reset" to control that 
> behavior, which means all streams are read either from "earliest" or "latest".
> However, it would be useful to improve this setting to allow users finer 
> control over different input streams. For example, with two input streams, 
> one of them could always read from offset 0 upon (re)starting, while the 
> other reads from the log end offset.
> This JIRA requires extending the {{KStreamBuilder}} API methods 
> {{.stream(...)}} and {{.table(...)}} to add a new parameter that indicates 
> the initial offset to be used.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4114) Allow for different "auto.offset.reset" strategies for different input streams

2016-09-28 Thread Bill Bejeck (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15531675#comment-15531675
 ] 

Bill Bejeck commented on KAFKA-4114:


No worries on the response. 

Thanks for clearing things up.  I knew the first case was covered by the 
StreamsConfig settings, but my confusion stemmed from the {{Consumer}} 
interface: when I took a look, I completely missed that the {{seekToEnd()}} 
and {{seekToBeginning()}} methods take a {{Collection}} parameter, thus 
enabling the 'fine-grained' control per topic.  I think this is precisely 
the approach to take vs multiple consumers.

From now on I think I will institute a self-imposed 24 hour hold on all 
questions.

Thanks for the comments.

> Allow for different "auto.offset.reset" strategies for different input streams
> --
>
> Key: KAFKA-4114
> URL: https://issues.apache.org/jira/browse/KAFKA-4114
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: Bill Bejeck
>
> Today we only have one consumer config "auto.offset.reset" to control that 
> behavior, which means all streams are read either from "earliest" or "latest".
> However, it would be useful to improve this setting to allow users finer 
> control over different input streams. For example, with two input streams, 
> one of them could always read from offset 0 upon (re)starting, while the 
> other reads from the log end offset.
> This JIRA requires extending the {{KStreamBuilder}} API methods 
> {{.stream(...)}} and {{.table(...)}} to add a new parameter that indicates 
> the initial offset to be used.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk8 #923

2016-09-28 Thread Apache Jenkins Server
See 

Changes:

[jay.kreps] MINOR: Improve introduction section in docs to better cover connect 
and

[ismael] KAFKA-4209; Reduce run time for quota integration tests

--
[...truncated 13784 lines...]
org.apache.kafka.streams.processor.internals.StreamThreadTest > testMaybeClean 
PASSED

org.apache.kafka.streams.processor.internals.PunctuationQueueTest > 
testPunctuationInterval STARTED

org.apache.kafka.streams.processor.internals.PunctuationQueueTest > 
testPunctuationInterval PASSED

org.apache.kafka.streams.processor.internals.RecordQueueTest > testTimeTracking 
STARTED

org.apache.kafka.streams.processor.internals.RecordQueueTest > testTimeTracking 
PASSED

org.apache.kafka.streams.processor.internals.RecordQueueTest > 
shouldThrowStreamsExceptionWhenKeyDeserializationFails STARTED

org.apache.kafka.streams.processor.internals.RecordQueueTest > 
shouldThrowStreamsExceptionWhenKeyDeserializationFails PASSED

org.apache.kafka.streams.processor.internals.RecordQueueTest > 
shouldThrowStreamsExceptionWhenValueDeserializationFails STARTED

org.apache.kafka.streams.processor.internals.RecordQueueTest > 
shouldThrowStreamsExceptionWhenValueDeserializationFails PASSED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testRegisterNonPersistentStore STARTED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testRegisterNonPersistentStore PASSED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
shouldRegisterStoreWithoutLoggingEnabledAndNotBackedByATopic STARTED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
shouldRegisterStoreWithoutLoggingEnabledAndNotBackedByATopic PASSED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testGetStore STARTED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testGetStore PASSED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testClose STARTED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testClose PASSED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testChangeLogOffsets STARTED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testChangeLogOffsets PASSED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testRegisterPersistentStore STARTED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testRegisterPersistentStore PASSED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testNoTopic STARTED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testNoTopic PASSED

org.apache.kafka.streams.processor.internals.StreamTaskTest > 
shouldWrapKafkaExceptionsWithStreamsExceptionAndAddContextWhenPunctuating 
STARTED

org.apache.kafka.streams.processor.internals.StreamTaskTest > 
shouldWrapKafkaExceptionsWithStreamsExceptionAndAddContextWhenPunctuating PASSED

org.apache.kafka.streams.processor.internals.StreamTaskTest > testProcessOrder 
STARTED

org.apache.kafka.streams.processor.internals.StreamTaskTest > testProcessOrder 
PASSED

org.apache.kafka.streams.processor.internals.StreamTaskTest > 
shouldWrapKafkaExceptionsWithStreamsExceptionAndAddContext STARTED

org.apache.kafka.streams.processor.internals.StreamTaskTest > 
shouldWrapKafkaExceptionsWithStreamsExceptionAndAddContext PASSED

org.apache.kafka.streams.processor.internals.StreamTaskTest > testPauseResume 
STARTED

org.apache.kafka.streams.processor.internals.StreamTaskTest > testPauseResume 
PASSED

org.apache.kafka.streams.processor.internals.StreamTaskTest > 
testMaybePunctuate STARTED

org.apache.kafka.streams.processor.internals.StreamTaskTest > 
testMaybePunctuate PASSED

org.apache.kafka.streams.processor.internals.QuickUnionTest > testUnite STARTED

org.apache.kafka.streams.processor.internals.QuickUnionTest > testUnite PASSED

org.apache.kafka.streams.processor.internals.QuickUnionTest > testUniteMany 
STARTED

org.apache.kafka.streams.processor.internals.QuickUnionTest > testUniteMany 
PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowNullProcessorSupplier STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowNullProcessorSupplier PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotSetApplicationIdToNull STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotSetApplicationIdToNull PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > testSourceTopics 
STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > testSourceTopics PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowNullNameWhenAddingSink STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowNullNameWhenAddingSink PASSED


[jira] [Commented] (KAFKA-4114) Allow for different "auto.offset.reset" strategies for different input streams

2016-09-28 Thread Matthias J. Sax (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15531570#comment-15531570
 ] 

Matthias J. Sax commented on KAFKA-4114:


Sorry for the late reply. Case #1 is already supported right now. Because there 
is a single Consumer, its reset policy is configured via, e.g., 
{{StreamsConfig#put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");}}, which 
applies to all streams/tables that are consumed. This JIRA should enable case 
#2. However, I think we do not need multiple consumers. I would rather set the 
default value for this consumer config to {{"none"}} and use 
{{seekToBeginning()}} and {{seekToEnd()}}, according to the reset strategy 
provided per stream/table, to manually reset the offsets instead of relying on 
the consumer's internal reset. WDYT? Maybe [~guozhang] also wants to comment?
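
A minimal Java sketch of that idea (the topic names and the {{partitionsFor}} 
helper are hypothetical, and the sketch assumes these partitions are already 
assigned to the consumer):

{noformat}
import java.util.Collection;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

// Disable the consumer's own reset, then seek per input stream.
Properties consumerProps = new Properties();
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "none");
KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(consumerProps);

// Hypothetical helper resolving a topic's assigned TopicPartitions.
Collection<TopicPartition> fromBeginning = partitionsFor("replayed-input");
Collection<TopicPartition> fromEnd = partitionsFor("live-input");

consumer.seekToBeginning(fromBeginning);  // replay from offset 0
consumer.seekToEnd(fromEnd);              // start at the log end offset
{noformat}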

> Allow for different "auto.offset.reset" strategies for different input streams
> --
>
> Key: KAFKA-4114
> URL: https://issues.apache.org/jira/browse/KAFKA-4114
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: Bill Bejeck
>
> Today we only have one consumer config "auto.offset.reset" to control that 
> behavior, which means all streams are read either from "earliest" or "latest".
> However, it would be useful to improve this setting to allow users finer 
> control over different input streams. For example, with two input streams, 
> one of them could always read from offset 0 upon (re)starting, while the 
> other reads from the log end offset.
> This JIRA requires extending the {{KStreamBuilder}} API methods 
> {{.stream(...)}} and {{.table(...)}} to add a new parameter that indicates 
> the initial offset to be used.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-3478) Finer Stream Flow Control

2016-09-28 Thread Matthias J. Sax (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax resolved KAFKA-3478.

Resolution: Duplicate

> Finer Stream Flow Control
> -
>
> Key: KAFKA-3478
> URL: https://issues.apache.org/jira/browse/KAFKA-3478
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Guozhang Wang
>  Labels: user-experience
> Fix For: 0.10.2.0
>
>
> Today we have an event-time based flow control mechanism in order to 
> synchronize multiple input streams in a best-effort manner:
> http://docs.confluent.io/3.0.0/streams/architecture.html#flow-control-with-timestamps
> However, there are some use cases where users would like finer control of 
> the input streams, for example, with two input streams, one of them always 
> reading from offset 0 upon (re)starting, and the other reading from the log 
> end offset.
> Today we only have one consumer config "auto.offset.reset" to control that 
> behavior, which means all streams are read either from "earliest" or "latest".
> We should consider how to improve this setting to allow users finer control 
> over these streams.
> =
> A finer flow control could also be used to allow for populating a {{KTable}} 
> (with an "initial" state) before starting the actual processing (this feature 
> was asked for on the mailing list multiple times already). Even if it is quite 
> hard to define *when* the initial populating phase should end, this might 
> still be useful. There would be the following possibilities:
>  1) an initial fixed time period for populating
>(it might be hard for a user to estimate the correct value)
>  2) an "idle" period, ie, if no update to a KTable for a certain time is
> done, we consider it as populated
>  3) a timestamp cut off point, ie, all records with an older timestamp
> belong to the initial populating phase
>  4) a throughput threshold, ie, if the populating frequency falls below
> the threshold, the KTable is considered "finished"
>  5) maybe something else ??
> The API might look something like this
> {noformat}
> KTable table = builder.table("topic", 1000); // populate the table without 
> reading any other topics until we see one record with timestamp 1000.
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #1926: MINOR: Set JVM parameters for the Gradle Test exec...

2016-09-28 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1926


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Build failed in Jenkins: kafka-0.10.1-jdk7 #19

2016-09-28 Thread Apache Jenkins Server
See 

Changes:

[ismael] KAFKA-4227; Shutdown AdminManager when KafkaServer is shutdown

--
[...truncated 13748 lines...]
org.apache.kafka.streams.processor.internals.assignment.TaskAssignorTest > 
testAssignWithStandby PASSED

org.apache.kafka.streams.processor.internals.assignment.TaskAssignorTest > 
testAssignWithoutStandby STARTED

org.apache.kafka.streams.processor.internals.assignment.TaskAssignorTest > 
testAssignWithoutStandby PASSED

org.apache.kafka.streams.processor.internals.ProcessorTopologyTest > 
testDrivingMultiplexingTopology STARTED

org.apache.kafka.streams.processor.internals.ProcessorTopologyTest > 
testDrivingMultiplexingTopology PASSED

org.apache.kafka.streams.processor.internals.ProcessorTopologyTest > 
testDrivingStatefulTopology STARTED

org.apache.kafka.streams.processor.internals.ProcessorTopologyTest > 
testDrivingStatefulTopology PASSED

org.apache.kafka.streams.processor.internals.ProcessorTopologyTest > 
testDrivingSimpleTopology STARTED

org.apache.kafka.streams.processor.internals.ProcessorTopologyTest > 
testDrivingSimpleTopology PASSED

org.apache.kafka.streams.processor.internals.ProcessorTopologyTest > 
testDrivingSimpleMultiSourceTopology STARTED

org.apache.kafka.streams.processor.internals.ProcessorTopologyTest > 
testDrivingSimpleMultiSourceTopology PASSED

org.apache.kafka.streams.processor.internals.ProcessorTopologyTest > 
testTopologyMetadata STARTED

org.apache.kafka.streams.processor.internals.ProcessorTopologyTest > 
testTopologyMetadata PASSED

org.apache.kafka.streams.processor.internals.ProcessorTopologyTest > 
testDrivingMultiplexByNameTopology STARTED

org.apache.kafka.streams.processor.internals.ProcessorTopologyTest > 
testDrivingMultiplexByNameTopology PASSED

org.apache.kafka.streams.processor.internals.RecordCollectorTest > 
testSpecificPartition STARTED

org.apache.kafka.streams.processor.internals.RecordCollectorTest > 
testSpecificPartition PASSED

org.apache.kafka.streams.processor.internals.RecordCollectorTest > 
shouldThrowStreamsExceptionAfterMaxAttempts STARTED

org.apache.kafka.streams.processor.internals.RecordCollectorTest > 
shouldThrowStreamsExceptionAfterMaxAttempts PASSED

org.apache.kafka.streams.processor.internals.RecordCollectorTest > 
shouldRetryWhenTimeoutExceptionOccursOnSend STARTED

org.apache.kafka.streams.processor.internals.RecordCollectorTest > 
shouldRetryWhenTimeoutExceptionOccursOnSend PASSED

org.apache.kafka.streams.processor.internals.RecordCollectorTest > 
testStreamPartitioner STARTED

org.apache.kafka.streams.processor.internals.RecordCollectorTest > 
testStreamPartitioner PASSED

org.apache.kafka.streams.processor.internals.PunctuationQueueTest > 
testPunctuationInterval STARTED

org.apache.kafka.streams.processor.internals.PunctuationQueueTest > 
testPunctuationInterval PASSED

org.apache.kafka.streams.processor.internals.AbstractTaskTest > 
shouldThrowProcessorStateExceptionOnInitializeOffsetsWhenAuthorizationException 
STARTED

org.apache.kafka.streams.processor.internals.AbstractTaskTest > 
shouldThrowProcessorStateExceptionOnInitializeOffsetsWhenAuthorizationException 
PASSED

org.apache.kafka.streams.processor.internals.AbstractTaskTest > 
shouldThrowProcessorStateExceptionOnInitializeOffsetsWhenKafkaException STARTED

org.apache.kafka.streams.processor.internals.AbstractTaskTest > 
shouldThrowProcessorStateExceptionOnInitializeOffsetsWhenKafkaException PASSED

org.apache.kafka.streams.processor.internals.AbstractTaskTest > 
shouldThrowWakeupExceptionOnInitializeOffsetsWhenWakeupException STARTED

org.apache.kafka.streams.processor.internals.AbstractTaskTest > 
shouldThrowWakeupExceptionOnInitializeOffsetsWhenWakeupException PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
shouldThrowExceptionIfApplicationServerConfigIsNotHostPortPair STARTED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
shouldThrowExceptionIfApplicationServerConfigIsNotHostPortPair PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
shouldMapUserEndPointToTopicPartitions STARTED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
shouldMapUserEndPointToTopicPartitions PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
shouldAddUserDefinedEndPointToSubscription STARTED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
shouldAddUserDefinedEndPointToSubscription PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
testAssignWithStandbyReplicas STARTED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
testAssignWithStandbyReplicas PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
testAssignWithNewTasks STARTED


[jira] [Updated] (KAFKA-4225) Replication Quotas: Control Leader & Follower Limit Separately

2016-09-28 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-4225:
---
Fix Version/s: 0.10.1.0

> Replication Quotas: Control Leader & Follower Limit Separately
> --
>
> Key: KAFKA-4225
> URL: https://issues.apache.org/jira/browse/KAFKA-4225
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.1.0
>Reporter: Ben Stopford
>Assignee: Ben Stopford
> Fix For: 0.10.1.0
>
>
> As we split the throttled replicas into Leader and Follower configs, it makes 
> sense to also split the throttle limit:
> replication.quota.throttled.rate
>=>
> replication.leader.quota.throttled.rate
> & 
> replication.follower.quota.throttled.rate
> So that admins have fine-grained control over both sides of the replication 
> process and the properties match for leader/follower applicability.
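
A sketch of what the split might look like once configured (the property names 
come from this issue; the rates, in bytes/sec, are invented for illustration):

{noformat}
import java.util.Properties;

// Hypothetical values; the property names are from this issue.
Properties throttle = new Properties();
throttle.put("replication.leader.quota.throttled.rate", "10485760");   // 10 MB/s leader-side
throttle.put("replication.follower.quota.throttled.rate", "5242880");  //  5 MB/s follower-side
{noformat}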



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4194) Add more tests for KIP-79

2016-09-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15531433#comment-15531433
 ] 

ASF GitHub Bot commented on KAFKA-4194:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1897


> Add more tests for KIP-79
> -
>
> Key: KAFKA-4194
> URL: https://issues.apache.org/jira/browse/KAFKA-4194
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Fix For: 0.10.1.0
>
>
> This is a follow-up ticket to add more tests for KIP-79, including the 
> integration tests and client tests if necessary.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-4194) Add more tests for KIP-79

2016-09-28 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson updated KAFKA-4194:
---
   Resolution: Fixed
Fix Version/s: 0.10.1.0
   Status: Resolved  (was: Patch Available)

Issue resolved by pull request 1897
[https://github.com/apache/kafka/pull/1897]

> Add more tests for KIP-79
> -
>
> Key: KAFKA-4194
> URL: https://issues.apache.org/jira/browse/KAFKA-4194
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Fix For: 0.10.1.0
>
>
> This is a follow-up ticket to add more tests for KIP-79, including the 
> integration tests and client tests if necessary.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #1897: KAFKA-4194 follow up patch for KIP-79

2016-09-28 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1897


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-4216) Replication Quotas: Control Leader & Follower Throttled Replicas Separately

2016-09-28 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-4216:
---
Priority: Critical  (was: Major)

> Replication Quotas: Control Leader & Follower Throttled Replicas Separately
> ---
>
> Key: KAFKA-4216
> URL: https://issues.apache.org/jira/browse/KAFKA-4216
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.1.0
>Reporter: Ben Stopford
>Assignee: Ben Stopford
>Priority: Critical
> Fix For: 0.10.1.0
>
>
> The current algorithm in kafka-reassign-partitions applies a throttle to all 
> moving replicas, be they leader-side or follower-side. 
> A preferable solution would be to change the throttled replica list to 
> specify whether the throttle applies to leader or follower. That way we can 
> ensure that the regular replication will not be throttled.  
> To do this we should change the way the throttled replica list is specified 
> so it is spread over two separate properties. One that corresponds to the 
> leader-side throttle, and the other that corresponds to the follower-side 
> throttle.
> quota.leader.replication.throttled.replicas = 
> [partId]:[replica],[partId]:[replica],[partId]:[replica] 
> quota.follower.replication.throttled.replicas = 
> [partId]:[replica],[partId]:[replica],[partId]:[replica] 
> Then, when applying the throttle, the leader quota can be applied to all 
> current replicas, and the follower quota can be applied only to the new 
> replicas we are creating as part of the rebalance. 
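
A sketch of the proposed shape (property names as given above; the 
[partId]:[replica] pairs are invented for illustration):

{noformat}
import java.util.Properties;

// Hypothetical values illustrating the proposed per-side replica lists.
Properties throttled = new Properties();
throttled.put("quota.leader.replication.throttled.replicas", "0:101,0:102,1:101");  // existing replicas
throttled.put("quota.follower.replication.throttled.replicas", "0:103,1:104");      // replicas being created
{noformat}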



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-3283) Remove beta from new consumer documentation

2016-09-28 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-3283.

Resolution: Fixed

> Remove beta from new consumer documentation
> ---
>
> Key: KAFKA-3283
> URL: https://issues.apache.org/jira/browse/KAFKA-3283
> Project: Kafka
>  Issue Type: Improvement
>  Components: consumer
>Reporter: Ismael Juma
>Assignee: Ismael Juma
> Fix For: 0.10.1.0
>
>
> Ideally, we would:
> * Remove the beta label
> * Fill any critical gaps in functionality
> * Update the documentation on the old consumers to recommend the new consumer 
> (without deprecating the old consumer, however)
> Current target is 0.10.1.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3283) Remove beta from new consumer documentation

2016-09-28 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3283:
---
Status: Reopened  (was: Closed)

> Remove beta from new consumer documentation
> ---
>
> Key: KAFKA-3283
> URL: https://issues.apache.org/jira/browse/KAFKA-3283
> Project: Kafka
>  Issue Type: Improvement
>  Components: consumer
>Reporter: Ismael Juma
>Assignee: Ismael Juma
> Fix For: 0.10.1.0
>
>
> Ideally, we would:
> * Remove the beta label
> * Fill any critical gaps in functionality
> * Update the documentation on the old consumers to recommend the new consumer 
> (without deprecating the old consumer, however)
> Current target is 0.10.1.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Website Update

2016-09-28 Thread Gwen Shapira
Hi Kafka Fans,

We just pushed an update to the website. We changed the byline from
"Kafka is a pub-sub system rethought as a distributed log" to "Kafka is
a stream platform" - because this reflects the modern use of Kafka a
lot better, and stream processing systems are the use cases we optimize
for. We also added an amazing diagram to the front page.

And by "we" I mean Jay Kreps did all that :) I'm kinda proud that
Kafka is a project where the docs are so valued that Kafka founders
work on the docs. It is what makes Kafka one of the best Apache
projects out there.

Comments, improvements, and contributions are welcome and encouraged.

-- 
Gwen Shapira


Including streams and connect better in site docs

2016-09-28 Thread Jay Kreps
Hey guys,

Gwen and I took a stab at better integrating Connect and Streams in the
Kafka site...they were largely absent in the api section, intro, home page,
etc. Take a look and see what you think. Major changes are the following:

   - Changed tag line from "a distributed messaging system" to "a
   distributed streaming platform" since Kafka wasn't ever much of a messaging
   system and is definitely a lot more than that now.
   - Rewrote the introduction page
   - Stopped linking from the nav into sections in the doc

I think there is a lot more that could be done there--the design section
still dates largely from the 0.6 days.

The site is still super ugly, especially the graphic on the home page. As a
separate thing we'll start a discussion around restyling to improve the
look of things.

-Jay


[GitHub] kafka-site pull request #19: Try to include streams and connect better. Remo...

2016-09-28 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka-site/pull/19


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-4228) Sender thread death leaves KafkaProducer in a bad state

2016-09-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4228?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15531356#comment-15531356
 ] 

ASF GitHub Bot commented on KAFKA-4228:
---

GitHub user radai-rosenblatt opened a pull request:

https://github.com/apache/kafka/pull/1930

KAFKA-4228 - make producer close on sender thread death, make consumer 
shutdown on failure to rebalance, and make MM die on any of the above.

the JIRA issue (https://issues.apache.org/jira/browse/KAFKA-4228) details a 
cascade of failures that resulted in an entire mirror maker cluster stalling 
due to an OOM death on one mm instance.

this patch makes producers and consumers close themselves on the errors 
encountered, and makes MM shut down if anything happens to its producers or 
consumers.

Signed-off-by: radai-rosenblatt 

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/radai-rosenblatt/kafka honorable-death

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1930.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1930


commit efca6e8dfa65acb5fbb1bc1801fda151b08f4f81
Author: radai-rosenblatt 
Date:   2016-09-29T00:16:16Z

KAFKA-4228 - make producer close on sender thread death, make consumer 
shutdown on failure to rebalance, and make MM die on any of the above.

Signed-off-by: radai-rosenblatt 




> Sender thread death leaves KafkaProducer in a bad state
> ---
>
> Key: KAFKA-4228
> URL: https://issues.apache.org/jira/browse/KAFKA-4228
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.10.0.1
>Reporter: radai rosenblatt
>
> a KafkaProducer's Sender thread may die:
> {noformat}
> 2016/09/28 00:28:01.065 ERROR [KafkaThread] [kafka-producer-network-thread | 
> mm_ei-lca1_uniform] [kafka-mirror-maker] [] Uncaught exception in 
> kafka-producer-network-thread | mm_ei-lca1_uni
> java.lang.OutOfMemoryError: Java heap space
>at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57) ~[?:1.8.0_40]
>at java.nio.ByteBuffer.allocate(ByteBuffer.java:335) ~[?:1.8.0_40]
>at 
> org.apache.kafka.common.requests.RequestSend.serialize(RequestSend.java:35) 
> ~[kafka-clients-0.9.0.666.jar:?]
>at 
> org.apache.kafka.common.requests.RequestSend.<init>(RequestSend.java:29) 
> ~[kafka-clients-0.9.0.666.jar:?]
>at 
> org.apache.kafka.clients.producer.internals.Sender.produceRequest(Sender.java:355)
>  ~[kafka-clients-0.9.0.666.jar:?]
>at 
> org.apache.kafka.clients.producer.internals.Sender.createProduceRequests(Sender.java:337)
>  ~[kafka-clients-0.9.0.666.jar:?]
>at 
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:211) 
> ~[kafka-clients-0.9.0.666.jar:?]
>at 
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:134) 
> ~[kafka-clients-0.9.0.666.jar:?]
>at java.lang.Thread.run(Thread.java:745) [?:1.8.0_40]
> {noformat}
> which leaves the producer in a bad state. In this state, a call to flush(), 
> for example, will hang indefinitely, as the sender thread is not around to 
> flush batches, but they've not been aborted.
> even worse, this can happen in MirrorMaker just before a rebalance, at which 
> point MM will just block indefinitely during a rebalance (in 
> beforeReleasingPartitions()).
> a rebalance participant hung in such a way will cause rebalance to fail for 
> the rest of the participants, at which point 
> ZKRebalancerListener.watcherExecutorThread() dies to an exception (cannot 
> rebalance after X attempts) but the consumer that ran the thread will remain 
> live. the end result is a bunch of zombie mirror makers and orphan topic 
> partitions.
> a dead sender thread should result in closing the producer.
> a consumer failing to rebalance should shut down.
> any issue with the producer or consumer should cause mirror-maker death.
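
A defensive pattern on the caller side (a sketch, not the proposed patch; 
{{props}} and {{record}} are placeholders): {{flush()}} cannot be bounded, but 
{{close(long, TimeUnit)}} can, so a dead sender thread at least cannot hang 
shutdown forever:

{noformat}
import java.util.concurrent.TimeUnit;
import org.apache.kafka.clients.producer.KafkaProducer;

// Sketch only: bound the shutdown path. If the sender thread has died,
// buffered batches will never drain, but close() returns after 5 seconds
// instead of blocking indefinitely the way flush() would.
KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(props);
try {
    producer.send(record);
} finally {
    producer.close(5, TimeUnit.SECONDS);
}
{noformat}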



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #1930: KAFKA-4228 - make producer close on sender thread ...

2016-09-28 Thread radai-rosenblatt
GitHub user radai-rosenblatt opened a pull request:

https://github.com/apache/kafka/pull/1930

KAFKA-4228 - make producer close on sender thread death, make consumer 
shutdown on failure to rebalance, and make MM die on any of the above.

the JIRA issue (https://issues.apache.org/jira/browse/KAFKA-4228) details a 
cascade of failures that resulted in an entire mirror maker cluster stalling 
due to an OOM death on one mm instance.

this patch makes producers and consumers close themselves on the errors 
encountered, and makes MM shut down if anything happens to its producers or 
consumers.

Signed-off-by: radai-rosenblatt 

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/radai-rosenblatt/kafka honorable-death

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1930.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1930


commit efca6e8dfa65acb5fbb1bc1801fda151b08f4f81
Author: radai-rosenblatt 
Date:   2016-09-29T00:16:16Z

KAFKA-4228 - make producer close on sender thread death, make consumer 
shutdown on failure to rebalance, and make MM die on any of the above.

Signed-off-by: radai-rosenblatt 




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Jenkins build is back to normal : kafka-trunk-jdk7 #1579

2016-09-28 Thread Apache Jenkins Server
See 



[jira] [Commented] (KAFKA-4212) Add a key-value store that is a TTL persistent cache

2016-09-28 Thread Elias Levy (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15531353#comment-15531353
 ] 

Elias Levy commented on KAFKA-4212:
---

I am using the {{KStream.transform}} API, the {{Transform}} interface, and 
making use of state stores that I create.  So it is very close to using the 
low-level {{Processor}} API.  But even so, the logic is essentially the same as 
the high-level join, using a sliding window, except that one of the streams has 
a composite key.

I don't see what hopping windows have to do with the use case.

> Add a key-value store that is a TTL persistent cache
> 
>
> Key: KAFKA-4212
> URL: https://issues.apache.org/jira/browse/KAFKA-4212
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 0.10.0.1
>Reporter: Elias Levy
>Assignee: Guozhang Wang
>
> Some jobs need to maintain as state a large set of key-values for some 
> period of time.  I.e. they need to maintain a TTL cache of values potentially 
> larger than memory. 
> Currently Kafka Streams provides non-windowed and windowed key-value stores.  
> Neither is an exact fit for this use case.  
> The {{RocksDBStore}}, a {{KeyValueStore}}, stores one value per key as 
> required, but does not support expiration.  The TTL option of RocksDB is 
> explicitly not used.
> The {{RocksDBWindowsStore}}, a {{WindowsStore}}, can expire items via segment 
> dropping, but it stores multiple items per key, based on their timestamp.  
> But this store can be repurposed as a cache by fetching the items in reverse 
> chronological order and returning the first item found.
> KAFKA-2594 introduced a fixed-capacity in-memory LRU caching store, but here 
> we desire a variable-capacity memory-overflowing TTL caching store.
> Although {{RocksDBWindowsStore}} can be repurposed as a cache, it would be 
> useful to have an official and proper TTL cache API and implementation.
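For concreteness, a minimal sketch of the repurposing trick described above, 
against the 0.10 {{WindowStore}} API (the class and method names are invented; 
this assumes the RocksDB-backed store returns entries in ascending timestamp 
order, so the last entry seen is the newest):

{code}
import org.apache.kafka.streams.state.WindowStore;
import org.apache.kafka.streams.state.WindowStoreIterator;

final class TtlCacheRead {
    // Fetch all values stored for 'key' within the last 'ttlMs' milliseconds
    // and keep the most recent one; returns null if nothing is in the window.
    static <K, V> V latestWithinTtl(WindowStore<K, V> store, K key,
                                    long nowMs, long ttlMs) {
        V latest = null;
        try (WindowStoreIterator<V> iter = store.fetch(key, nowMs - ttlMs, nowMs)) {
            while (iter.hasNext()) {
                latest = iter.next().value; // later entries overwrite earlier ones
            }
        }
        return latest;
    }
}
{code}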



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-4228) Sender thread death leaves KafkaProducer in a bad state

2016-09-28 Thread radai rosenblatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

radai rosenblatt updated KAFKA-4228:

Description: 
A KafkaProducer's Sender thread may die:
{noformat}
2016/09/28 00:28:01.065 ERROR [KafkaThread] [kafka-producer-network-thread | 
mm_ei-lca1_uniform] [kafka-mirror-maker] [] Uncaught exception in 
kafka-producer-network-thread | mm_ei-lca1_uni
java.lang.OutOfMemoryError: Java heap space
   at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57) ~[?:1.8.0_40]
   at java.nio.ByteBuffer.allocate(ByteBuffer.java:335) ~[?:1.8.0_40]
   at 
org.apache.kafka.common.requests.RequestSend.serialize(RequestSend.java:35) 
~[kafka-clients-0.9.0.666.jar:?]
   at 
org.apache.kafka.common.requests.RequestSend.<init>(RequestSend.java:29) 
~[kafka-clients-0.9.0.666.jar:?]
   at 
org.apache.kafka.clients.producer.internals.Sender.produceRequest(Sender.java:355)
 ~[kafka-clients-0.9.0.666.jar:?]
   at 
org.apache.kafka.clients.producer.internals.Sender.createProduceRequests(Sender.java:337)
 ~[kafka-clients-0.9.0.666.jar:?]
   at 
org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:211) 
~[kafka-clients-0.9.0.666.jar:?]
   at 
org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:134) 
~[kafka-clients-0.9.0.666.jar:?]
   at java.lang.Thread.run(Thread.java:745) [?:1.8.0_40]
{noformat}

which leaves the producer in a bad state. In this state, a call to flush(), for 
example, will hang indefinitely: the sender thread is no longer around to flush 
batches, but they have not been aborted either.

Even worse, this can happen in MirrorMaker just before a rebalance, at which 
point MM will just block indefinitely during the rebalance (in 
beforeReleasingPartitions()).

A rebalance participant hung in such a way will cause the rebalance to fail for 
the rest of the participants, at which point 
ZKRebalancerListener.watcherExecutorThread() dies with an exception (cannot 
rebalance after X attempts) but the consumer that ran the thread will remain 
alive. The end result is a bunch of zombie mirror makers and orphan topic 
partitions.

A dead sender thread should result in closing the producer.
A consumer failing to rebalance should shut down.
Any issue with the producer or consumer should cause mirror-maker death.


  was:
A KafkaProducer's Sender thread may die:
{noformat}
2016/09/28 00:28:01.065 ERROR [KafkaThread] [kafka-producer-network-thread | 
mm_ei-lca1_uniform] [kafka-mirror-maker] [] Uncaught exception in 
kafka-producer-network-thread | mm_ei-lca1_uni
java.lang.OutOfMemoryError: Java heap space
   at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57) ~[?:1.8.0_40]
   at java.nio.ByteBuffer.allocate(ByteBuffer.java:335) ~[?:1.8.0_40]
   at 
org.apache.kafka.common.requests.RequestSend.serialize(RequestSend.java:35) 
~[kafka-clients-0.9.0.666.jar:?]
   at 
org.apache.kafka.common.requests.RequestSend.<init>(RequestSend.java:29) 
~[kafka-clients-0.9.0.666.jar:?]
   at 
org.apache.kafka.clients.producer.internals.Sender.produceRequest(Sender.java:355)
 ~[kafka-clients-0.9.0.666.jar:?]
   at 
org.apache.kafka.clients.producer.internals.Sender.createProduceRequests(Sender.java:337)
 ~[kafka-clients-0.9.0.666.jar:?]
   at 
org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:211) 
~[kafka-clients-0.9.0.666.jar:?]
   at 
org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:134) 
~[kafka-clients-0.9.0.666.jar:?]
   at java.lang.Thread.run(Thread.java:745) [?:1.8.0_40]
{noformat}

which leaves the producer in a bad state. In this state, a call to flush(), for 
example, will hang indefinitely: the sender thread is no longer around to flush 
batches, but they have not been aborted either.

Even worse, this can happen in MirrorMaker just before a rebalance, at which 
point MM will just block indefinitely during the rebalance and the end result is 
unowned topic partitions.

A dead sender thread should result in closing the producer, and a closed 
producer should result in MirrorMaker death.



> Sender thread death leaves KafkaProducer in a bad state
> ---
>
> Key: KAFKA-4228
> URL: https://issues.apache.org/jira/browse/KAFKA-4228
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.10.0.1
>Reporter: radai rosenblatt
>
> A KafkaProducer's Sender thread may die:
> {noformat}
> 2016/09/28 00:28:01.065 ERROR [KafkaThread] [kafka-producer-network-thread | 
> mm_ei-lca1_uniform] [kafka-mirror-maker] [] Uncaught exception in 
> kafka-producer-network-thread | mm_ei-lca1_uni
> java.lang.OutOfMemoryError: Java heap space
>at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57) ~[?:1.8.0_40]
>at java.nio.ByteBuffer.allocate(ByteBuffer.java:335) ~[?:1.8.0_40]
>at 
> 

[jira] [Commented] (KAFKA-4209) Reduce time taken to run quota integration tests

2016-09-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15531334#comment-15531334
 ] 

ASF GitHub Bot commented on KAFKA-4209:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1902


> Reduce time taken to run quota integration tests
> 
>
> Key: KAFKA-4209
> URL: https://issues.apache.org/jira/browse/KAFKA-4209
> Project: Kafka
>  Issue Type: Test
>  Components: unit tests
>Affects Versions: 0.10.1.0
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
> Fix For: 0.10.2.0
>
>
> Quota integration tests take over a minute per class to run. Since there are 
> three versions of the test now (for client-id, user, and <user, client-id>), 
> the total time for these tests has a big impact on the total test run time. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-4209) Reduce time taken to run quota integration tests

2016-09-28 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-4209:
---
   Resolution: Fixed
Fix Version/s: 0.10.2.0
   Status: Resolved  (was: Patch Available)

Issue resolved by pull request 1902
[https://github.com/apache/kafka/pull/1902]

> Reduce time taken to run quota integration tests
> 
>
> Key: KAFKA-4209
> URL: https://issues.apache.org/jira/browse/KAFKA-4209
> Project: Kafka
>  Issue Type: Test
>  Components: unit tests
>Affects Versions: 0.10.1.0
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
> Fix For: 0.10.2.0
>
>
> Quota integration tests take over a minute per class to run. Since there are 
> three versions of the test now (for client-id, user, and <user, client-id>), 
> the total time for these tests has a big impact on the total test run time. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #1902: KAFKA-4209: Reduce run time for quota integration ...

2016-09-28 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1902


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (KAFKA-4228) Sender thread death leaves KafkaProducer in a bad state

2016-09-28 Thread radai rosenblatt (JIRA)
radai rosenblatt created KAFKA-4228:
---

 Summary: Sender thread death leaves KafkaProducer in a bad state
 Key: KAFKA-4228
 URL: https://issues.apache.org/jira/browse/KAFKA-4228
 Project: Kafka
  Issue Type: Bug
  Components: clients
Affects Versions: 0.10.0.1
Reporter: radai rosenblatt


A KafkaProducer's Sender thread may die:
{noformat}
2016/09/28 00:28:01.065 ERROR [KafkaThread] [kafka-producer-network-thread | 
mm_ei-lca1_uniform] [kafka-mirror-maker] [] Uncaught exception in 
kafka-producer-network-thread | mm_ei-lca1_uni
java.lang.OutOfMemoryError: Java heap space
   at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57) ~[?:1.8.0_40]
   at java.nio.ByteBuffer.allocate(ByteBuffer.java:335) ~[?:1.8.0_40]
   at 
org.apache.kafka.common.requests.RequestSend.serialize(RequestSend.java:35) 
~[kafka-clients-0.9.0.666.jar:?]
   at 
org.apache.kafka.common.requests.RequestSend.<init>(RequestSend.java:29) 
~[kafka-clients-0.9.0.666.jar:?]
   at 
org.apache.kafka.clients.producer.internals.Sender.produceRequest(Sender.java:355)
 ~[kafka-clients-0.9.0.666.jar:?]
   at 
org.apache.kafka.clients.producer.internals.Sender.createProduceRequests(Sender.java:337)
 ~[kafka-clients-0.9.0.666.jar:?]
   at 
org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:211) 
~[kafka-clients-0.9.0.666.jar:?]
   at 
org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:134) 
~[kafka-clients-0.9.0.666.jar:?]
   at java.lang.Thread.run(Thread.java:745) [?:1.8.0_40]
{noformat}

which leaves the producer in a bad state. In this state, a call to flush(), for 
example, will hang indefinitely: the sender thread is no longer around to flush 
batches, but they have not been aborted either.

Even worse, this can happen in MirrorMaker just before a rebalance, at which 
point MM will just block indefinitely during the rebalance and the end result is 
unowned topic partitions.

A dead sender thread should result in closing the producer, and a closed 
producer should result in MirrorMaker death.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (KAFKA-4184) Test failure in ReplicationQuotasTest.shouldBootstrapTwoBrokersWithFollowerThrottle

2016-09-28 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang reopened KAFKA-4184:
--

This test failed again: 
https://builds.apache.org/job/kafka-trunk-git-pr-jdk7/5905/testReport/junit/kafka.server/ReplicationQuotasTest/shouldBootstrapTwoBrokersWithLeaderThrottle/

{code}
Stacktrace

java.lang.AssertionError: Offsets did not match for partition 6 on broker 106
at org.junit.Assert.fail(Assert.java:88)
at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:752)
at 
kafka.server.ReplicationQuotasTest.kafka$server$ReplicationQuotasTest$$waitForOffsetsToMatch(ReplicationQuotasTest.scala:217)
at 
kafka.server.ReplicationQuotasTest$$anonfun$shouldMatchQuotaReplicatingThroughAnAsymmetricTopology$7.apply$mcZI$sp(ReplicationQuotasTest.scala:144)
at 
kafka.server.ReplicationQuotasTest$$anonfun$shouldMatchQuotaReplicatingThroughAnAsymmetricTopology$7.apply(ReplicationQuotasTest.scala:144)
at 
kafka.server.ReplicationQuotasTest$$anonfun$shouldMatchQuotaReplicatingThroughAnAsymmetricTopology$7.apply(ReplicationQuotasTest.scala:144)
at scala.collection.immutable.Range.foreach(Range.scala:160)
at 
kafka.server.ReplicationQuotasTest.shouldMatchQuotaReplicatingThroughAnAsymmetricTopology(ReplicationQuotasTest.scala:144)
at 
kafka.server.ReplicationQuotasTest.shouldBootstrapTwoBrokersWithLeaderThrottle(ReplicationQuotasTest.scala:68)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:114)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:57)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:66)
at 
org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
at sun.reflect.GeneratedMethodAccessor15.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at 
org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
at 
org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
at 
org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:109)
at sun.reflect.GeneratedMethodAccessor14.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at 
org.gradle.internal.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:377)
at 

[jira] [Commented] (KAFKA-4212) Add a key-value store that is a TTL persistent cache

2016-09-28 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15531263#comment-15531263
 ] 

Guozhang Wang commented on KAFKA-4212:
--

Ah, I thought you were trying to apply the lower-level Processor APIs rather 
than the high-level DSL to work around this issue for now. In that case you can 
implement your own "join" operations at will with the currently provided set of 
store functionalities.

With the current DSL, I agree that it cannot fully support this task yet.
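For example, a hand-rolled join along the lines Elias describes might look 
roughly like this (a sketch only: the store name, window size, and joined 
output format are invented, and it assumes the other stream's records have 
already been put into a window store by a sibling processor):

{code}
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.Transformer;
import org.apache.kafka.streams.processor.ProcessorContext;
import org.apache.kafka.streams.state.WindowStore;
import org.apache.kafka.streams.state.WindowStoreIterator;

class ManualJoin implements Transformer<String, String, KeyValue<String, String>> {
    private static final long JOIN_WINDOW_MS = 60_000L; // hypothetical window
    private WindowStore<String, String> otherStream;
    private ProcessorContext context;

    @Override
    @SuppressWarnings("unchecked")
    public void init(ProcessorContext context) {
        this.context = context;
        this.otherStream = (WindowStore<String, String>) context.getStateStore("other-stream");
    }

    @Override
    public KeyValue<String, String> transform(String key, String value) {
        // Look up the most recent value for 'key' on the other stream within
        // the join window; emit the joined pair, or nothing if no match.
        String match = null;
        long now = context.timestamp();
        try (WindowStoreIterator<String> iter =
                     otherStream.fetch(key, now - JOIN_WINDOW_MS, now)) {
            while (iter.hasNext()) {
                match = iter.next().value; // keep the newest entry
            }
        }
        return match == null ? null : KeyValue.pair(key, value + "," + match);
    }

    @Override
    public KeyValue<String, String> punctuate(long timestamp) {
        return null; // nothing scheduled
    }

    @Override
    public void close() {}
}
{code}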

> Add a key-value store that is a TTL persistent cache
> 
>
> Key: KAFKA-4212
> URL: https://issues.apache.org/jira/browse/KAFKA-4212
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 0.10.0.1
>Reporter: Elias Levy
>Assignee: Guozhang Wang
>
> Some jobs need to maintain as state a large set of key-values for some 
> period of time.  I.e. they need to maintain a TTL cache of values potentially 
> larger than memory. 
> Currently Kafka Streams provides non-windowed and windowed key-value stores.  
> Neither is an exact fit for this use case.  
> The {{RocksDBStore}}, a {{KeyValueStore}}, stores one value per key as 
> required, but does not support expiration.  The TTL option of RocksDB is 
> explicitly not used.
> The {{RocksDBWindowsStore}}, a {{WindowsStore}}, can expire items via segment 
> dropping, but it stores multiple items per key, based on their timestamp.  
> But this store can be repurposed as a cache by fetching the items in reverse 
> chronological order and returning the first item found.
> KAFKA-2594 introduced a fixed-capacity in-memory LRU caching store, but here 
> we desire a variable-capacity memory-overflowing TTL caching store.
> Although {{RocksDBWindowsStore}} can be repurposed as a cache, it would be 
> useful to have an official and proper TTL cache API and implementation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka-site pull request #19: Try to include streams and connect better. Remo...

2016-09-28 Thread jkreps
GitHub user jkreps opened a pull request:

https://github.com/apache/kafka-site/pull/19

Try to include streams and connect better. Removing messaging centric…

… terminology. Rewrite introduction page. Make linked doc sections 
stand alone.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/jkreps/kafka-site asf-site

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka-site/pull/19.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #19


commit 2cf87df9e320ebb14edaab943b2f8cfdb368880f
Author: Jay Kreps 
Date:   2016-09-28T23:28:48Z

Try to include streams and connect better. Removing messaging-centric 
terminology. Rewrite introduction page. Make linked doc sections stand alone.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-4212) Add a key-value store that is a TTL persistent cache

2016-09-28 Thread Elias Levy (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15531116#comment-15531116
 ] 

Elias Levy commented on KAFKA-4212:
---

But joins are not performed on hopping windows, they are performed on a single 
sliding window.  


> Add a key-value store that is a TTL persistent cache
> 
>
> Key: KAFKA-4212
> URL: https://issues.apache.org/jira/browse/KAFKA-4212
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 0.10.0.1
>Reporter: Elias Levy
>Assignee: Guozhang Wang
>
> Some jobs need to maintain as state a large set of key-values for some 
> period of time.  I.e. they need to maintain a TTL cache of values potentially 
> larger than memory. 
> Currently Kafka Streams provides non-windowed and windowed key-value stores.  
> Neither is an exact fit for this use case.  
> The {{RocksDBStore}}, a {{KeyValueStore}}, stores one value per key as 
> required, but does not support expiration.  The TTL option of RocksDB is 
> explicitly not used.
> The {{RocksDBWindowsStore}}, a {{WindowsStore}}, can expire items via segment 
> dropping, but it stores multiple items per key, based on their timestamp.  
> But this store can be repurposed as a cache by fetching the items in reverse 
> chronological order and returning the first item found.
> KAFKA-2594 introduced a fixed-capacity in-memory LRU caching store, but here 
> we desire a variable-capacity memory-overflowing TTL caching store.
> Although {{RocksDBWindowsStore}} can be repurposed as a cache, it would be 
> useful to have an official and proper TTL cache API and implementation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4212) Add a key-value store that is a TTL persistent cache

2016-09-28 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15531077#comment-15531077
 ] 

Guozhang Wang commented on KAFKA-4212:
--

When creating the windowed store with

{code}
Stores.withKeys(...)
  .withValues(...)
  .persistent()
  .windowed(...)
{code}

In {{windowed}}, users can specify the window size and whether to 
"retainDuplicates". If "retainDuplicates" is set to false, and you then put 
records into the window store with the timestamp floored by the window size 
using {{put(key, value, timestamp)}}, then records with the same key and the 
same floored timestamp (i.e. they fall into the same window) will overwrite 
old ones with the same key, hence you do not need a reverse scan in order 
to find the latest value.
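A small sketch of that flooring trick (the helper name is invented; it assumes 
a {{WindowStore}} built with retainDuplicates disabled, as described above):

{code}
import org.apache.kafka.streams.state.WindowStore;

final class TtlCacheWrite {
    // Floor the record timestamp to the start of its window so that repeated
    // puts of the same key within one window overwrite each other, making a
    // reverse scan for the latest value unnecessary.
    static <K, V> void putWithTtlFloor(WindowStore<K, V> store, K key, V value,
                                       long timestamp, long windowSizeMs) {
        long flooredTs = timestamp - (timestamp % windowSizeMs);
        store.put(key, value, flooredTs);
    }
}
{code}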

> Add a key-value store that is a TTL persistent cache
> 
>
> Key: KAFKA-4212
> URL: https://issues.apache.org/jira/browse/KAFKA-4212
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 0.10.0.1
>Reporter: Elias Levy
>Assignee: Guozhang Wang
>
> Some jobs need to maintain as state a large set of key-values for some 
> period of time.  I.e. they need to maintain a TTL cache of values potentially 
> larger than memory. 
> Currently Kafka Streams provides non-windowed and windowed key-value stores.  
> Neither is an exact fit for this use case.  
> The {{RocksDBStore}}, a {{KeyValueStore}}, stores one value per key as 
> required, but does not support expiration.  The TTL option of RocksDB is 
> explicitly not used.
> The {{RocksDBWindowsStore}}, a {{WindowsStore}}, can expire items via segment 
> dropping, but it stores multiple items per key, based on their timestamp.  
> But this store can be repurposed as a cache by fetching the items in reverse 
> chronological order and returning the first item found.
> KAFKA-2594 introduced a fixed-capacity in-memory LRU caching store, but here 
> we desire a variable-capacity memory-overflowing TTL caching store.
> Although {{RocksDBWindowsStore}} can be repurposed as a cache, it would be 
> useful to have an official and proper TTL cache API and implementation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4226) Surprising NoOffsetForPartitionException for paused partition with no reset policy

2016-09-28 Thread Vahid Hashemian (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15531071#comment-15531071
 ] 

Vahid Hashemian commented on KAFKA-4226:


Thanks [~hachikuji]. It fails as the unit test as you suggested, but for some 
reason I cannot get it to throw the exception if I run it separately like the 
code I pasted earlier (even in a clean cluster).

> Surprising NoOffsetForPartitionException for paused partition with no reset 
> policy
> --
>
> Key: KAFKA-4226
> URL: https://issues.apache.org/jira/browse/KAFKA-4226
> Project: Kafka
>  Issue Type: Improvement
>  Components: consumer
>Reporter: Jason Gustafson
>Priority: Minor
>
> If the user has no reset policy defined (i.e. auto.offset.reset is "none"), 
> then the consumer raises {{NoOffsetForPartitionException}} if it ever 
> encounters a situation in which it needs to reset the offset for that 
> partition. For example, this can happen when the consumer needs to set the 
> partition's initial position or if it encounters an out of range offset error 
> from a fetch. This option is helpful when you need direct control over the 
> behavior in these cases.
> I was a little surprised that the consumer currently raises this exception 
> even if the partition is in a paused state. So the following code raises the 
> exception:
> {code}
> consumerConfig.setProperty(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "none")
> val consumer = new KafkaConsumer(consumerConfig)
> consumer.assign(singleton(partition))
> consumer.pause(singleton(partition))
> consumer.poll(0)
> {code}
> Since we do not send any fetches when the partition is paused, it seems like 
> we could delay setting the offset for the partition until it is resumed. In 
> that case, the poll(0) would not raise in the example above. This would be a 
> relatively easy change, but I'm not sure if there are any downsides.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (KAFKA-4226) Surprising NoOffsetForPartitionException for paused partition with no reset policy

2016-09-28 Thread Vahid Hashemian (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15531071#comment-15531071
 ] 

Vahid Hashemian edited comment on KAFKA-4226 at 9/28/16 10:30 PM:
--

Thanks [~hachikuji]. It fails in the unit test as you suggested, but for some 
reason I cannot get it to throw the exception if I run it separately like the 
code I pasted earlier (even in a clean cluster).


was (Author: vahid):
Thanks [~hachikuji]. It fails as the unit test as you suggested, but for some 
reason I cannot get it to throw the exception if I run it separately like the 
code I pasted earlier (even in a clean cluster).

> Surprising NoOffsetForPartitionException for paused partition with no reset 
> policy
> --
>
> Key: KAFKA-4226
> URL: https://issues.apache.org/jira/browse/KAFKA-4226
> Project: Kafka
>  Issue Type: Improvement
>  Components: consumer
>Reporter: Jason Gustafson
>Priority: Minor
>
> If the user has no reset policy defined (i.e. auto.offset.reset is "none"), 
> then the consumer raises {{NoOffsetForPartitionException}} if it ever 
> encounters a situation in which it needs to reset the offset for that 
> partition. For example, this can happen when the consumer needs to set the 
> partition's initial position or if it encounters an out of range offset error 
> from a fetch. This option is helpful when you need direct control over the 
> behavior in these cases.
> I was a little surprised that the consumer currently raises this exception 
> even if the partition is in a paused state. So the following code raises the 
> exception:
> {code}
> consumerConfig.setProperty(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "none")
> val consumer = new KafkaConsumer(consumerConfig)
> consumer.assign(singleton(partition))
> consumer.pause(singleton(partition))
> consumer.poll(0)
> {code}
> Since we do not send any fetches when the partition is paused, it seems like 
> we could delay setting the offset for the partition until it is resumed. In 
> that case, the poll(0) would not raise in the example above. This would be a 
> relatively easy change, but I'm not sure if there are any downsides.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-0.10.1-jdk7 #18

2016-09-28 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-3708: Better exception handling in Kafka Streams

--
[...truncated 11894 lines...]

org.apache.kafka.common.security.ssl.SslFactoryTest > testClientMode PASSED

org.apache.kafka.common.security.ssl.SslFactoryTest > 
testSslFactoryConfiguration STARTED

org.apache.kafka.common.security.ssl.SslFactoryTest > 
testSslFactoryConfiguration PASSED

org.apache.kafka.common.security.kerberos.KerberosNameTest > testParse STARTED

org.apache.kafka.common.security.kerberos.KerberosNameTest > testParse PASSED
:clients:determineCommitId UP-TO-DATE
:clients:createVersionFile
:clients:jar UP-TO-DATE
:core:compileJava UP-TO-DATE
:core:compileScala
:79:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.

org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP
 ^
:36:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
 commitTimestamp: Long = 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP,

  ^
:37:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
 expireTimestamp: Long = 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP) {

  ^
:523:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
  if (offsetAndMetadata.expireTimestamp == 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP)

^
:311:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
  if (partitionData.timestamp == 
OffsetCommitRequest.DEFAULT_TIMESTAMP)
 ^
:93:
 class ProducerConfig in package producer is deprecated: This class has been 
deprecated and will be removed in a future release. Please use 
org.apache.kafka.clients.producer.ProducerConfig instead.
val producerConfig = new ProducerConfig(props)
 ^
:94:
 method fetchTopicMetadata in object ClientUtils is deprecated: This method has 
been deprecated and will be removed in a future release.
fetchTopicMetadata(topics, brokers, producerConfig, correlationId)
^
:393:
 constructor UpdateMetadataRequest in class UpdateMetadataRequest is 
deprecated: see corresponding Javadoc for more information.
new UpdateMetadataRequest(controllerId, controllerEpoch, 
liveBrokers.asJava, partitionStates.asJava)
^
:191:
 object ProducerRequestStatsRegistry in package producer is deprecated: This 
object has been deprecated and will be removed in a future release.
ProducerRequestStatsRegistry.removeProducerRequestStats(clientId)
^
:129:
 method readFromReadableChannel in class NetworkReceive is deprecated: see 
corresponding Javadoc for more information.
  response.readFromReadableChannel(channel)
   ^
:311:
 value timestamp in class PartitionData is deprecated: see corresponding 
Javadoc for more information.
  if (partitionData.timestamp == 
OffsetCommitRequest.DEFAULT_TIMESTAMP)
   

Build failed in Jenkins: kafka-trunk-jdk7 #1578

2016-09-28 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-3708: Better exception handling in Kafka Streams

--
[...truncated 13758 lines...]
org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > 
testPutIfAbsent PASSED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > 
testRestoreWithDefaultSerdes STARTED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > 
testRestoreWithDefaultSerdes PASSED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > 
testRestore STARTED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > 
testRestore PASSED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > 
testPutGetRange STARTED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > 
testPutGetRange PASSED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > 
testPutGetRangeWithDefaultSerdes STARTED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > 
testPutGetRangeWithDefaultSerdes PASSED

org.apache.kafka.streams.state.internals.DelegatingPeekingKeyValueIteratorTest 
> shouldPeekNext STARTED

org.apache.kafka.streams.state.internals.DelegatingPeekingKeyValueIteratorTest 
> shouldPeekNext PASSED

org.apache.kafka.streams.state.internals.DelegatingPeekingKeyValueIteratorTest 
> shouldPeekAndIterate STARTED

org.apache.kafka.streams.state.internals.DelegatingPeekingKeyValueIteratorTest 
> shouldPeekAndIterate PASSED

org.apache.kafka.streams.state.internals.DelegatingPeekingKeyValueIteratorTest 
> shouldThrowNoSuchElementWhenNoMoreItemsLeftAndPeekNextCalled STARTED

org.apache.kafka.streams.state.internals.DelegatingPeekingKeyValueIteratorTest 
> shouldThrowNoSuchElementWhenNoMoreItemsLeftAndPeekNextCalled PASSED

org.apache.kafka.streams.state.internals.DelegatingPeekingKeyValueIteratorTest 
> shouldThrowNoSuchElementWhenNoMoreItemsLeftAndNextCalled STARTED

org.apache.kafka.streams.state.internals.DelegatingPeekingKeyValueIteratorTest 
> shouldThrowNoSuchElementWhenNoMoreItemsLeftAndNextCalled PASSED

org.apache.kafka.streams.state.internals.MergedSortedCacheWindowStoreIteratorTest
 > shouldIterateOverValueFromBothIterators STARTED

org.apache.kafka.streams.state.internals.MergedSortedCacheWindowStoreIteratorTest
 > shouldIterateOverValueFromBothIterators PASSED

org.apache.kafka.streams.state.internals.StreamThreadStateStoreProviderTest > 
shouldReturnEmptyListIfNoStoresFoundWithName STARTED

org.apache.kafka.streams.state.internals.StreamThreadStateStoreProviderTest > 
shouldReturnEmptyListIfNoStoresFoundWithName PASSED

org.apache.kafka.streams.state.internals.StreamThreadStateStoreProviderTest > 
shouldThrowInvalidStoreExceptionIfKVStoreClosed STARTED

org.apache.kafka.streams.state.internals.StreamThreadStateStoreProviderTest > 
shouldThrowInvalidStoreExceptionIfKVStoreClosed PASSED

org.apache.kafka.streams.state.internals.StreamThreadStateStoreProviderTest > 
shouldThrowInvalidStoreExceptionIfNotAllStoresAvailable STARTED

org.apache.kafka.streams.state.internals.StreamThreadStateStoreProviderTest > 
shouldThrowInvalidStoreExceptionIfNotAllStoresAvailable PASSED

org.apache.kafka.streams.state.internals.StreamThreadStateStoreProviderTest > 
shouldFindWindowStores STARTED

org.apache.kafka.streams.state.internals.StreamThreadStateStoreProviderTest > 
shouldFindWindowStores PASSED

org.apache.kafka.streams.state.internals.StreamThreadStateStoreProviderTest > 
shouldReturnEmptyListIfStoreExistsButIsNotOfTypeValueStore STARTED

org.apache.kafka.streams.state.internals.StreamThreadStateStoreProviderTest > 
shouldReturnEmptyListIfStoreExistsButIsNotOfTypeValueStore PASSED

org.apache.kafka.streams.state.internals.StreamThreadStateStoreProviderTest > 
shouldFindKeyValueStores STARTED

org.apache.kafka.streams.state.internals.StreamThreadStateStoreProviderTest > 
shouldFindKeyValueStores PASSED

org.apache.kafka.streams.state.internals.StreamThreadStateStoreProviderTest > 
shouldThrowInvalidStoreExceptionIfWindowStoreClosed STARTED

org.apache.kafka.streams.state.internals.StreamThreadStateStoreProviderTest > 
shouldThrowInvalidStoreExceptionIfWindowStoreClosed PASSED

org.apache.kafka.streams.state.StoresTest > 
shouldCreateInMemoryStoreSupplierWithLoggedConfig STARTED

org.apache.kafka.streams.state.StoresTest > 
shouldCreateInMemoryStoreSupplierWithLoggedConfig PASSED

org.apache.kafka.streams.state.StoresTest > 
shouldCreatePersistenStoreSupplierNotLogged STARTED

org.apache.kafka.streams.state.StoresTest > 
shouldCreatePersistenStoreSupplierNotLogged PASSED

org.apache.kafka.streams.state.StoresTest > 
shouldCreatePersistenStoreSupplierWithLoggedConfig STARTED

org.apache.kafka.streams.state.StoresTest > 
shouldCreatePersistenStoreSupplierWithLoggedConfig PASSED

org.apache.kafka.streams.state.StoresTest > 
shouldCreateInMemoryStoreSupplierNotLogged STARTED


Build failed in Jenkins: kafka-trunk-jdk8 #922

2016-09-28 Thread Apache Jenkins Server
See 

Changes:

[ismael] KAFKA-4227; Shutdown AdminManager when KafkaServer is shutdown

--
[...truncated 11936 lines...]
:37:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
 expireTimestamp: Long = 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP) {

  ^
:523:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
  if (offsetAndMetadata.expireTimestamp == 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP)

^
:311:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
  if (partitionData.timestamp == 
OffsetCommitRequest.DEFAULT_TIMESTAMP)
 ^
:93:
 class ProducerConfig in package producer is deprecated: This class has been 
deprecated and will be removed in a future release. Please use 
org.apache.kafka.clients.producer.ProducerConfig instead.
val producerConfig = new ProducerConfig(props)
 ^
:94:
 method fetchTopicMetadata in object ClientUtils is deprecated: This method has 
been deprecated and will be removed in a future release.
fetchTopicMetadata(topics, brokers, producerConfig, correlationId)
^
:393:
 constructor UpdateMetadataRequest in class UpdateMetadataRequest is 
deprecated: see corresponding Javadoc for more information.
new UpdateMetadataRequest(controllerId, controllerEpoch, 
liveBrokers.asJava, partitionStates.asJava)
^
:191:
 object ProducerRequestStatsRegistry in package producer is deprecated: This 
object has been deprecated and will be removed in a future release.
ProducerRequestStatsRegistry.removeProducerRequestStats(clientId)
^
:129:
 method readFromReadableChannel in class NetworkReceive is deprecated: see 
corresponding Javadoc for more information.
  response.readFromReadableChannel(channel)
   ^
:311:
 value timestamp in class PartitionData is deprecated: see corresponding 
Javadoc for more information.
  if (partitionData.timestamp == 
OffsetCommitRequest.DEFAULT_TIMESTAMP)
^
:314:
 value timestamp in class PartitionData is deprecated: see corresponding 
Javadoc for more information.
offsetRetention + partitionData.timestamp
^
:553:
 method offsetData in class ListOffsetRequest is deprecated: see corresponding 
Javadoc for more information.
val (authorizedRequestInfo, unauthorizedRequestInfo) = 
offsetRequest.offsetData.asScala.partition {
 ^
:553:
 class PartitionData in object ListOffsetRequest is deprecated: see 
corresponding Javadoc for more information.
val (authorizedRequestInfo, unauthorizedRequestInfo) = 
offsetRequest.offsetData.asScala.partition {

  ^
:558:
 constructor PartitionData in class PartitionData is deprecated: see 
corresponding Javadoc for more information.
  new 

[jira] [Commented] (KAFKA-4212) Add a key-value store that is a TTL persistent cache

2016-09-28 Thread Elias Levy (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15531022#comment-15531022
 ] 

Elias Levy commented on KAFKA-4212:
---

Not sure I follow.  

> Add a key-value store that is a TTL persistent cache
> 
>
> Key: KAFKA-4212
> URL: https://issues.apache.org/jira/browse/KAFKA-4212
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 0.10.0.1
>Reporter: Elias Levy
>Assignee: Guozhang Wang
>
> Some jobs need to maintain as state a large set of key-values for some 
> period of time.  I.e. they need to maintain a TTL cache of values potentially 
> larger than memory. 
> Currently Kafka Streams provides non-windowed and windowed key-value stores.  
> Neither is an exact fit for this use case.  
> The {{RocksDBStore}}, a {{KeyValueStore}}, stores one value per key as 
> required, but does not support expiration.  The TTL option of RocksDB is 
> explicitly not used.
> The {{RocksDBWindowsStore}}, a {{WindowsStore}}, can expire items via segment 
> dropping, but it stores multiple items per key, based on their timestamp.  
> But this store can be repurposed as a cache by fetching the items in reverse 
> chronological order and returning the first item found.
> KAFKA-2594 introduced a fixed-capacity in-memory LRU caching store, but here 
> we desire a variable-capacity memory-overflowing TTL caching store.
> Although {{RocksDBWindowsStore}} can be repurposed as a cache, it would be 
> useful to have an official and proper TTL cache API and implementation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3990) Kafka New Producer may raise an OutOfMemoryError

2016-09-28 Thread Andrew Olson (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15531011#comment-15531011
 ] 

Andrew Olson commented on KAFKA-3990:
-

I think this Jira can be closed as a duplicate of KAFKA-2512; version and magic 
byte verification should address this. We saw the same thing with the new 
Consumer when its bootstrap.servers was accidentally set to the host:port of a 
Kafka Offset Monitor (https://github.com/quantifind/KafkaOffsetMonitor) service.
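Worth noting: the mysterious allocation size quoted in the description decodes 
to ASCII. 1213486160 is 0x48545450, i.e. the bytes "HTTP" read as a big-endian 
4-byte size prefix, which is exactly what a client sees when it misreads an 
HTTP response as a Kafka wire message. A quick check:

{code}
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class DecodeReceiveSize {
    public static void main(String[] args) {
        int receiveSize = 1213486160; // the value logged in the stack above
        byte[] bytes = ByteBuffer.allocate(4).putInt(receiveSize).array();
        // Prints "HTTP": the 4-byte Kafka size prefix was actually the start
        // of an HTTP response from the non-Kafka endpoint the client hit.
        System.out.println(new String(bytes, StandardCharsets.US_ASCII));
    }
}
{code}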

> Kafka New Producer may raise an OutOfMemoryError
> 
>
> Key: KAFKA-3990
> URL: https://issues.apache.org/jira/browse/KAFKA-3990
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.9.0.1
> Environment: Docker, Base image : CentOS
> Java 8u77
> Marathon
>Reporter: Brice Dutheil
> Attachments: app-producer-config.log, kafka-broker-logs.zip
>
>
> We are regularly seeing OOME errors on a Kafka producer; we first saw:
> {code}
> java.lang.OutOfMemoryError: Java heap space
> at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57) ~[na:1.8.0_77]
> at java.nio.ByteBuffer.allocate(ByteBuffer.java:335) ~[na:1.8.0_77]
> at 
> org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:93)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:71)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:153) 
> ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:134) 
> ~[kafka-clients-0.9.0.1.jar:na]
> at org.apache.kafka.common.network.Selector.poll(Selector.java:286) 
> ~[kafka-clients-0.9.0.1.jar:na]
> at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:256) 
> ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:216) 
> ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:128) 
> ~[kafka-clients-0.9.0.1.jar:na]
> at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_77]
> {code}
> This line refers to a buffer allocation {{ByteBuffer.allocate(receiveSize)}} 
> (see 
> https://github.com/apache/kafka/blob/0.9.0.1/clients/src/main/java/org/apache/kafka/common/network/NetworkReceive.java#L93)
> Usually the app runs fine within a 200/400 MB heap and a 64 MB Metaspace, and 
> we are producing small messages, 500 B at most.
> Also, the error doesn't appear in the development environment; in order to 
> identify the issue we tweaked the code to give us the actual allocation size, 
> and we got this stack:
> {code}
> 09:55:49.484 [auth] [kafka-producer-network-thread | producer-1] WARN  
> o.a.k.c.n.NetworkReceive HEAP-ISSUE: constructor : Integer='-1', String='-1'
> 09:55:49.485 [auth] [kafka-producer-network-thread | producer-1] WARN  
> o.a.k.c.n.NetworkReceive HEAP-ISSUE: method : 
> NetworkReceive.readFromReadableChannel.receiveSize=1213486160
> java.lang.OutOfMemoryError: Java heap space
> Dumping heap to /tmp/tomcat.hprof ...
> Heap dump file created [69583827 bytes in 0.365 secs]
> 09:55:50.324 [auth] [kafka-producer-network-thread | producer-1] ERROR 
> o.a.k.c.utils.KafkaThread Uncaught exception in kafka-producer-network-thread 
> | producer-1: 
> java.lang.OutOfMemoryError: Java heap space
>   at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57) ~[na:1.8.0_77]
>   at java.nio.ByteBuffer.allocate(ByteBuffer.java:335) ~[na:1.8.0_77]
>   at 
> org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:93)
>  ~[kafka-clients-0.9.0.1.jar:na]
>   at 
> org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:71)
>  ~[kafka-clients-0.9.0.1.jar:na]
>   at 
> org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:153) 
> ~[kafka-clients-0.9.0.1.jar:na]
>   at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:134) 
> ~[kafka-clients-0.9.0.1.jar:na]
>   at org.apache.kafka.common.network.Selector.poll(Selector.java:286) 
> ~[kafka-clients-0.9.0.1.jar:na]
>   at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:256) 
> ~[kafka-clients-0.9.0.1.jar:na]
>   at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:216) 
> ~[kafka-clients-0.9.0.1.jar:na]
>   at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:128) 
> ~[kafka-clients-0.9.0.1.jar:na]
>   at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_77]
> {code}
> Notice the size to allocate {{1213486160}} ~1.2 GB. I'm not yet sure how this 
> size is initialised.
> Notice as well that every time this OOME appear the {{NetworkReceive}} 
> constructor at 
> 

[jira] [Commented] (KAFKA-4226) Surprising NoOffsetForPartitionException for paused partition with no reset policy

2016-09-28 Thread Jason Gustafson (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15530986#comment-15530986
 ] 

Jason Gustafson commented on KAFKA-4226:


[~vahid] Are you working on a clean cluster? I was able to reproduce this with 
the following test case in {{PlaintextConsumerTest}}:

{code}
  @Test(expected = classOf[NoOffsetForPartitionException])
  def testNoOffsetForPausedPartition(): Unit = {
this.consumerConfig.setProperty(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, 
"none")
val consumer0 = new KafkaConsumer(this.consumerConfig, new 
ByteArrayDeserializer(), new ByteArrayDeserializer())
consumers += consumer0

consumer0.assign(List(tp).asJava)
val partition = new TopicPartition("foo", 0)
consumer0.assign(Collections.singleton(partition))
consumer0.pause(Collections.singleton(partition))
consumer0.poll(0)
  }
{code}
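For reference, with {{auto.offset.reset}} set to "none" the application owns 
the reset decision, so caller code today has to handle it explicitly, along 
these lines (a hypothetical helper against the 0.10 consumer API; the class and 
method names are invented, and seeking to the beginning is just one possible 
choice):

{code}
import java.util.Collections;

import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.NoOffsetForPartitionException;
import org.apache.kafka.common.TopicPartition;

final class ExplicitReset {
    // Catch the reset exception and seek somewhere deliberate (here: the
    // beginning of the partition) before polling again.
    static ConsumerRecords<byte[], byte[]> pollWithExplicitReset(
            Consumer<byte[], byte[]> consumer, TopicPartition partition) {
        try {
            return consumer.poll(0);
        } catch (NoOffsetForPartitionException e) {
            consumer.seekToBeginning(Collections.singleton(partition));
            return consumer.poll(0);
        }
    }
}
{code}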

> Surprising NoOffsetForPartitionException for paused partition with no reset 
> policy
> --
>
> Key: KAFKA-4226
> URL: https://issues.apache.org/jira/browse/KAFKA-4226
> Project: Kafka
>  Issue Type: Improvement
>  Components: consumer
>Reporter: Jason Gustafson
>Priority: Minor
>
> If the user has no reset policy defined (i.e. auto.offset.reset is "none"), 
> then the consumer raises {{NoOffsetForPartitionException}} if it ever 
> encounters a situation in which it needs to reset the offset for that 
> partition. For example, this can happen when the consumer needs to set the 
> partition's initial position or if it encounters an out of range offset error 
> from a fetch. This option is helpful when you need direct control over the 
> behavior in these cases.
> I was a little surprised that the consumer currently raises this exception 
> even if the partition is in a paused state. So the following code raises the 
> exception:
> {code}
> consumerConfig.setProperty(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "none")
> val consumer = new KafkaConsumer(consumerConfig)
> consumer.assign(singleton(partition))
> consumer.pause(singleton(partition))
> consumer.poll(0)
> {code}
> Since we do not send any fetches when the partition is paused, it seems like 
> we could delay setting the offset for the partition until it is resumed. In 
> that case, the poll(0) would not raise in the example above. This would be a 
> relatively easy change, but I'm not sure if there are any downsides.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4212) Add a key-value store that is a TTL persistent cache

2016-09-28 Thread Elias Levy (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15530995#comment-15530995
 ] 

Elias Levy commented on KAFKA-4212:
---

I would describe it as a TTL, not an LRU.  We want the records to expire based 
on the record timestamp, not on when the record was last accessed.

> Add a key-value store that is a TTL persistent cache
> 
>
> Key: KAFKA-4212
> URL: https://issues.apache.org/jira/browse/KAFKA-4212
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 0.10.0.1
>Reporter: Elias Levy
>Assignee: Guozhang Wang
>
> Some jobs need to maintain as state a large set of key-values for some 
> period of time.  I.e. they need to maintain a TTL cache of values potentially 
> larger than memory. 
> Currently Kafka Streams provides non-windowed and windowed key-value stores.  
> Neither is an exact fit for this use case.  
> The {{RocksDBStore}}, a {{KeyValueStore}}, stores one value per key as 
> required, but does not support expiration.  The TTL option of RocksDB is 
> explicitly not used.
> The {{RocksDBWindowsStore}}, a {{WindowsStore}}, can expire items via segment 
> dropping, but it stores multiple items per key, based on their timestamp.  
> But this store can be repurposed as a cache by fetching the items in reverse 
> chronological order and returning the first item found.
> KAFKA-2594 introduced a fixed-capacity in-memory LRU caching store, but here 
> we desire a variable-capacity memory-overflowing TTL caching store.
> Although {{RocksDBWindowsStore}} can be repurposed as a cache, it would be 
> useful to have an official and proper TTL cache API and implementation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #1929: HOTFIX: do not call partitioner if num partitions ...

2016-09-28 Thread guozhangwang
GitHub user guozhangwang opened a pull request:

https://github.com/apache/kafka/pull/1929

HOTFIX: do not call partitioner if num partitions is non-positive



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/guozhangwang/kafka 
KMinor-check-zero-num-partitions

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1929.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1929


commit 3127f9c48efafc361aca378c2c52b6a9b420e6f8
Author: Guozhang Wang 
Date:   2016-09-28T21:34:00Z

do not call partitioner if num partitions is non-positive




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-4212) Add a key-value store that is a TTL persistent cache

2016-09-28 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15530909#comment-15530909
 ] 

Guozhang Wang commented on KAFKA-4212:
--

Thanks for the detailed description, Elias. I think a 
{{PersistentLRUCacheStore}} would be a good addition to fit this scenario. 

> Add a key-value store that is a TTL persistent cache
> 
>
> Key: KAFKA-4212
> URL: https://issues.apache.org/jira/browse/KAFKA-4212
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 0.10.0.1
>Reporter: Elias Levy
>Assignee: Guozhang Wang
>
> Some jobs need to maintain as state a large set of key-values for some 
> period of time.  I.e. they need to maintain a TTL cache of values potentially 
> larger than memory. 
> Currently Kafka Streams provides non-windowed and windowed key-value stores.  
> Neither is an exact fit for this use case.  
> The {{RocksDBStore}}, a {{KeyValueStore}}, stores one value per key as 
> required, but does not support expiration.  The TTL option of RocksDB is 
> explicitly not used.
> The {{RocksDBWindowsStore}}, a {{WindowsStore}}, can expire items via segment 
> dropping, but it stores multiple items per key, based on their timestamp.  
> But this store can be repurposed as a cache by fetching the items in reverse 
> chronological order and returning the first item found.
> KAFKA-2594 introduced a fixed-capacity in-memory LRU caching store, but here 
> we desire a variable-capacity memory-overflowing TTL caching store.
> Although {{RocksDBWindowsStore}} can be repurposed as a cache, it would be 
> useful to have an official and proper TTL cache API and implementation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4212) Add a key-value store that is a TTL persistent cache

2016-09-28 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15530904#comment-15530904
 ] 

Guozhang Wang commented on KAFKA-4212:
--

Records within the same window with the same key will overwrite old records 
with the same key within that window, so if you create a hopping windowed store 
with the window length as the TTL length, that should be OK?

> Add a key-value store that is a TTL persistent cache
> 
>
> Key: KAFKA-4212
> URL: https://issues.apache.org/jira/browse/KAFKA-4212
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 0.10.0.1
>Reporter: Elias Levy
>Assignee: Guozhang Wang
>
> Some jobs need to maintain as state a large set of key-values for some 
> period of time.  I.e. they need to maintain a TTL cache of values potentially 
> larger than memory. 
> Currently Kafka Streams provides non-windowed and windowed key-value stores.  
> Neither is an exact fit for this use case.  
> The {{RocksDBStore}}, a {{KeyValueStore}}, stores one value per key as 
> required, but does not support expiration.  The TTL option of RocksDB is 
> explicitly not used.
> The {{RocksDBWindowsStore}}, a {{WindowsStore}}, can expire items via segment 
> dropping, but it stores multiple items per key, based on their timestamp.  
> But this store can be repurposed as a cache by fetching the items in reverse 
> chronological order and returning the first item found.
> KAFKA-2594 introduced a fixed-capacity in-memory LRU caching store, but here 
> we desire a variable-capacity memory-overflowing TTL caching store.
> Although {{RocksDBWindowsStore}} can be repurposed as a cache, it would be 
> useful to have an official and proper TTL cache API and implementation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4226) Surprising NoOffsetForPartitionException for paused partition with no reset policy

2016-09-28 Thread Vahid Hashemian (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15530851#comment-15530851
 ] 

Vahid Hashemian commented on KAFKA-4226:


[~hachikuji] The following code does not produce any errors for me. Am I 
missing something?
{code}
import java.util.{Collections, Properties}

import org.apache.kafka.clients.consumer.{ConsumerConfig, KafkaConsumer}
import org.apache.kafka.common.TopicPartition

val consumerConfig = new Properties()
consumerConfig.setProperty(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092")
consumerConfig.setProperty(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "none")
consumerConfig.setProperty(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
  "org.apache.kafka.common.serialization.StringDeserializer")
consumerConfig.setProperty(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
  "org.apache.kafka.common.serialization.StringDeserializer")

val consumer = new KafkaConsumer[String, String](consumerConfig)
val partition = new TopicPartition("foo", 0)
consumer.assign(Collections.singleton(partition))
consumer.pause(Collections.singleton(partition))
consumer.poll(0)
{code}

> Surprising NoOffsetForPartitionException for paused partition with no reset 
> policy
> --
>
> Key: KAFKA-4226
> URL: https://issues.apache.org/jira/browse/KAFKA-4226
> Project: Kafka
>  Issue Type: Improvement
>  Components: consumer
>Reporter: Jason Gustafson
>Priority: Minor
>
> If the user has no reset policy defined (i.e. auto.offset.reset is "none"), 
> then the consumer raises {{NoOffsetForPartitionException}} if it ever 
> encounters a situation in which it needs to reset the offset for that 
> partition. For example, this can happen when the consumer needs to set the 
> partition's initial position or if it encounters an out of range offset error 
> from a fetch. This option is helpful when you need direct control over the 
> behavior in these cases.
> I was a little surprised that the consumer currently raises this exception 
> even if the partition is in a paused state. So the following code raises the 
> exception:
> {code}
> consumerConfig.setProperty(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "none")
> val consumer = new KafkaConsumer(consumerConfig)
> consumer.assign(singleton(partition))
> consumer.pause(singleton(partition))
> consumer.poll(0)
> {code}
> Since we do not send any fetches when the partition is paused, it seems like 
> we could delay setting the offset for the partition until it is resumed. In 
> that case, the poll(0) would not raise in the example above. This would be a 
> relatively easy change, but I'm not sure if there are any downsides.
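
Until that changes, one workaround sketch (Java, reusing {{consumer}} and 
{{partition}} from the snippet above; it assumes the application can choose its 
own starting offset when none is stored):

{code}
import java.util.Collections;
import org.apache.kafka.clients.consumer.NoOffsetForPartitionException;

// Hypothetical handling: treat the exception as "no position established yet"
// and pick a starting offset explicitly instead of relying on a reset policy.
try {
    consumer.poll(0);
} catch (NoOffsetForPartitionException e) {
    consumer.seekToBeginning(Collections.singleton(partition)); // or seek() to a stored offset
    consumer.poll(0);
}
{code}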



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4225) Replication Quotas: Control Leader & Follower Limit Separately

2016-09-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15530714#comment-15530714
 ] 

ASF GitHub Bot commented on KAFKA-4225:
---

GitHub user benstopford opened a pull request:

https://github.com/apache/kafka/pull/1928

KAFKA-4225: Replication Quotas: Control Leader & Follower Limit Separately

See final commit. 

(depends on https://github.com/apache/kafka/pull/1906)

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/benstopford/kafka KAFKA-4225-over-KAFKA-4216

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1928.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1928


commit ddcb115fa6a098cef79209ae8022a4467b8ea873
Author: Ben Stopford 
Date:   2016-09-26T14:26:11Z

KAFKA-4216: First cut of split of leader/follower replica lists.

commit bee043a85f4f8034785f3f83447e732dcd8d316e
Author: Ben Stopford 
Date:   2016-09-26T15:34:41Z

KAFKA-4216: Refactored the way we calculate which replicas should be 
throttled. No functional change in this commit.

commit 1c416f465ccfe6c9c57c2e6e86c71616034e8b99
Author: Ben Stopford 
Date:   2016-09-27T09:12:05Z

KAFKA-4177: Refactored the way we derive the throttled replica list (it was 
super ugly). No functional change

commit 637316e23a4cd3290101dd62080158a36c63f57c
Author: Ben Stopford 
Date:   2016-09-27T09:17:54Z

KAFKA-4177: Formatting only based on Jun's feedback

commit 2e20c50c94242e9a1a59cfdb826cb68739e5bed0
Author: Ben Stopford 
Date:   2016-09-27T09:24:03Z

KAFKA-4177: remove reformatting of license file text

commit 70f263e1b6371bb27a1afd33299cf378cd8d3f67
Author: Ben Stopford 
Date:   2016-09-27T14:36:46Z

KAFKA-4216: small formatting change.

commit bc07a348283aa307bde6a32d0cf7e84cb7b1ad64
Author: Ben Stopford 
Date:   2016-09-27T14:46:52Z

KAFKA-4216: Improved comments

commit bab1c38a370f8d0b73ac9a3ed40880da4ee95e38
Author: Ben Stopford 
Date:   2016-09-28T10:59:46Z

KAFKA-4216: Consolidated code that creates properties

commit 97acd19854b19564b9db4aac3166d18fdcd9ce0f
Author: Ben Stopford 
Date:   2016-09-28T14:59:47Z

KAFKA-4216: Change throttled replica list property to be of type LIST not 
STRING

commit 17af1e728d3e32a44ea0cbcee1967e31ce401ca1
Author: Ben Stopford 
Date:   2016-09-28T15:26:38Z

KAFKA-4216: Stylistic / logging changes only

commit faedb50f8646a24da94bac2ed047dcda326bdb8d
Author: Ben Stopford 
Date:   2016-09-28T16:13:07Z

KAFKA-4216: Added validation to ensure that all partitions in the proposed 
assignment exist in the cluster

commit b39a4a4870dca46750b4063e79b1b8fc8aaf79df
Author: Ben Stopford 
Date:   2016-09-28T16:23:13Z

KAFKA-4216: Small change to add distinct to the topic selection

commit ae65db17eb53e5822e7bc137fc94c2ee7c73392f
Author: Ben Stopford 
Date:   2016-09-28T16:24:42Z

KAFKA-4216: Alteration to previous commit. One of the distinct calls was not 
needed.

commit dd2d161cb0dc5ad785f93427cfc7a63c9ebbdb14
Author: Ben Stopford 
Date:   2016-09-28T16:26:34Z

KAFKA-4216: Removed long line

commit 5080f9560b7657e552564aca35e633a322631b48
Author: Ben Stopford 
Date:   2016-09-28T16:34:39Z

KAFKA-4216: Ismael's suggested '(proposed(tp).toSet -- current)' change

commit 67ac7bea374764100a79033e4245f7c09a6d435b
Author: Ben Stopford 
Date:   2016-09-28T16:37:07Z

KAFKA-4216: Ismael's suggestion to format method

commit 797fded9d9b9d1df1f4b76fbbae74dad07b98ee8
Author: Ben Stopford 
Date:   2016-09-28T16:40:13Z

KAFKA-4216: Stylistic changes.

commit 5d06a265847d558ad60c1b82c9bfb1b30dce47b0
Author: Ben Stopford 
Date:   2016-09-28T16:41:11Z

KAFKA-4216: Add text to fail()

commit 84b5602e01636bb8ffa7cdabe1e9b782d67f5e22
Author: Ben Stopford 
Date:   2016-09-28T17:32:58Z

KAFKA-4216: silly error in test.

commit 784c5cdf81be1abf13b8959cf7d3f3bbef2c2d12
Author: Ben Stopford 
Date:   2016-09-28T19:40:52Z

KAFKA-4225: Replication Quotas: Control Leader & Follower Limit Separately




> Replication Quotas: Control Leader & Follower Limit Separately
> --
>
> Key: KAFKA-4225
> URL: https://issues.apache.org/jira/browse/KAFKA-4225
> Project: Kafka
>  

[GitHub] kafka pull request #1928: KAFKA-4225: Replication Quotas: Control Leader & F...

2016-09-28 Thread benstopford
GitHub user benstopford opened a pull request:

https://github.com/apache/kafka/pull/1928

KAFKA-4225: Replication Quotas: Control Leader & Follower Limit Separately

See final commit. 

(depends on https://github.com/apache/kafka/pull/1906)

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/benstopford/kafka KAFKA-4225-over-KAFKA-4216

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1928.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1928


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Resolved] (KAFKA-4227) AdminManager is not shutdown when KafkaServer is shutdown

2016-09-28 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-4227.

Resolution: Fixed

> AdminManager is not shutdown when KafkaServer is shutdown
> -
>
> Key: KAFKA-4227
> URL: https://issues.apache.org/jira/browse/KAFKA-4227
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.10.1.0
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
> Fix For: 0.10.1.0
>
>
> Lots of ExpirationReaper threads are left behind in unit tests, increasing 
> memory usage for tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4227) AdminManager is not shutdown when KafkaServer is shutdown

2016-09-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15530699#comment-15530699
 ] 

ASF GitHub Bot commented on KAFKA-4227:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1927


> AdminManager is not shutdown when KafkaServer is shutdown
> -
>
> Key: KAFKA-4227
> URL: https://issues.apache.org/jira/browse/KAFKA-4227
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.10.1.0
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
> Fix For: 0.10.1.0
>
>
> Lots of ExpirationReaper threads are left behind in unit tests, increasing 
> memory usage for tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-4227) AdminManager is not shutdown when KafkaServer is shutdown

2016-09-28 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-4227:
---
Fix Version/s: 0.10.1.0

> AdminManager is not shutdown when KafkaServer is shutdown
> -
>
> Key: KAFKA-4227
> URL: https://issues.apache.org/jira/browse/KAFKA-4227
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.10.1.0
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
> Fix For: 0.10.1.0
>
>
> Lots of ExpirationReaper threads are left behind in unit tests, increasing 
> memory usage for tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #1927: KAFKA-4227: Shutdown AdminManager when KafkaServer...

2016-09-28 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1927


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Build failed in Jenkins: kafka-trunk-jdk8 #921

2016-09-28 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-3708: Better exception handling in Kafka Streams

--
[...truncated 13785 lines...]
org.apache.kafka.streams.processor.internals.MinTimestampTrackerTest > 
testTracking PASSED

org.apache.kafka.streams.processor.internals.AbstractTaskTest > 
shouldThrowProcessorStateExceptionOnInitializeOffsetsWhenAuthorizationException 
STARTED

org.apache.kafka.streams.processor.internals.AbstractTaskTest > 
shouldThrowProcessorStateExceptionOnInitializeOffsetsWhenAuthorizationException 
PASSED

org.apache.kafka.streams.processor.internals.AbstractTaskTest > 
shouldThrowProcessorStateExceptionOnInitializeOffsetsWhenKafkaException STARTED

org.apache.kafka.streams.processor.internals.AbstractTaskTest > 
shouldThrowProcessorStateExceptionOnInitializeOffsetsWhenKafkaException PASSED

org.apache.kafka.streams.processor.internals.AbstractTaskTest > 
shouldThrowWakeupExceptionOnInitializeOffsetsWhenWakeupException STARTED

org.apache.kafka.streams.processor.internals.AbstractTaskTest > 
shouldThrowWakeupExceptionOnInitializeOffsetsWhenWakeupException PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
shouldThrowExceptionIfApplicationServerConfigIsNotHostPortPair STARTED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
shouldThrowExceptionIfApplicationServerConfigIsNotHostPortPair PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
shouldMapUserEndPointToTopicPartitions STARTED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
shouldMapUserEndPointToTopicPartitions PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
shouldAddUserDefinedEndPointToSubscription STARTED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
shouldAddUserDefinedEndPointToSubscription PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
testAssignWithStandbyReplicas STARTED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
testAssignWithStandbyReplicas PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
testAssignWithNewTasks STARTED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
testAssignWithNewTasks PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
shouldExposeHostStateToTopicPartitionsOnAssignment STARTED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
shouldExposeHostStateToTopicPartitionsOnAssignment PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
testAssignWithStates STARTED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
testAssignWithStates PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
shouldThrowExceptionIfApplicationServerConfigPortIsNotAnInteger STARTED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
shouldThrowExceptionIfApplicationServerConfigPortIsNotAnInteger PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
shouldReturnEmptyClusterMetadataIfItHasntBeenBuilt STARTED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
shouldReturnEmptyClusterMetadataIfItHasntBeenBuilt PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
testAssignBasic STARTED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
testAssignBasic PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
testOnAssignment STARTED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
testOnAssignment PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
testAssignWithInternalTopicThatsSourceIsAnotherInternalTopic STARTED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
testAssignWithInternalTopicThatsSourceIsAnotherInternalTopic PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
testSubscription STARTED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
testSubscription PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
shouldSetClusterMetadataOnAssignment STARTED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
shouldSetClusterMetadataOnAssignment PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
testAssignWithInternalTopics STARTED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
testAssignWithInternalTopics PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowNullProcessorSupplier STARTED


[jira] [Commented] (KAFKA-4185) Abstract out password verifier in SaslServer as an injectable dependency

2016-09-28 Thread Piyush Vijay (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15530580#comment-15530580
 ] 

Piyush Vijay commented on KAFKA-4185:
-

[~ecomar], even though I agree that the procedure for adding a new SaslServer 
implementation is well-documented, I believe there is a lot of value for 
end users in just letting this patch in. Why have them copy and paste some code 
when it can be avoided? What is the objection?


> Abstract out password verifier in SaslServer as an injectable dependency
> 
>
> Key: KAFKA-4185
> URL: https://issues.apache.org/jira/browse/KAFKA-4185
> Project: Kafka
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 0.10.0.1
>Reporter: Piyush Vijay
> Fix For: 0.10.0.2
>
>
> Kafka comes with a default SASL/PLAIN implementation which assumes that 
> username and password are present in a JAAS
> config file. People often want to use some other way to provide username and 
> password to SaslServer. Their best bet,
> currently, is to have their own implementation of SaslServer (which would be, 
> in most cases, a copied version of PlainSaslServer
> minus the logic where password verification happens). This is not ideal.
> We believe that there exists a better way to structure the current 
> PlainSaslServer implementation which makes it very
> easy for people to plug-in their custom password verifier without having to 
> rewrite SaslServer or copy any code.
> The idea is to have an injectable dependency interface PasswordVerifier which 
> can be re-implemented based on the
> requirements. There would be no need to re-implement or extend 
> PlainSaslServer class.
> Note that this is a commonly requested feature, and there have been several 
> attempts in the past to solve this problem:
> https://github.com/apache/kafka/pull/1350
> https://github.com/apache/kafka/pull/1770
> https://issues.apache.org/jira/browse/KAFKA-2629
> https://issues.apache.org/jira/browse/KAFKA-3679
> We believe that this proposed solution does not have the demerits for which 
> the previous proposals were rejected.
> I would be happy to discuss more.
> Please find the link to the PR in the comments.
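
For a sense of the proposed shape (the interface name and signature here are 
assumptions, not the actual patch):

{code}
// Hypothetical injectable dependency: PlainSaslServer would delegate the
// password check to a user-configured implementation of this interface.
public interface PasswordVerifier {
    boolean verify(String username, char[] password);
}
{code}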



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-4186) Transient failure in KStreamAggregationIntegrationTest.shouldGroupByKey

2016-09-28 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson updated KAFKA-4186:
---
Fix Version/s: 0.10.1.0

> Transient failure in KStreamAggregationIntegrationTest.shouldGroupByKey
> ---
>
> Key: KAFKA-4186
> URL: https://issues.apache.org/jira/browse/KAFKA-4186
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Jason Gustafson
> Fix For: 0.10.1.0
>
>
> Saw this running locally off of trunk:
> {code}
> org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
> shouldGroupByKey[1] FAILED
> java.lang.AssertionError: Condition not met within timeout 6. Did not 
> receive 10 number of records
> at 
> org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:268)
> at 
> org.apache.kafka.streams.integration.utils.IntegrationTestUtils.waitUntilMinKeyValueRecordsReceived(IntegrationTestUtils.java:211)
> at 
> org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest.receiveMessages(KStreamAggregationIntegrationTest.java:480)
> at 
> org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest.shouldGroupByKey(KStreamAggregationIntegrationTest.java:407)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4222) Transient failure in QueryableStateIntegrationTest.queryOnRebalance

2016-09-28 Thread Vahid Hashemian (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4222?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15530456#comment-15530456
 ] 

Vahid Hashemian commented on KAFKA-4222:


[Another 
example|https://builds.apache.org/job/kafka-trunk-git-pr-jdk7/5921/testReport/junit/org.apache.kafka.streams.integration/QueryableStateIntegrationTest/queryOnRebalance_0_/]
 of this failure.

> Transient failure in QueryableStateIntegrationTest.queryOnRebalance
> ---
>
> Key: KAFKA-4222
> URL: https://issues.apache.org/jira/browse/KAFKA-4222
> Project: Kafka
>  Issue Type: Bug
>  Components: streams, unit tests
>Reporter: Jason Gustafson
>Assignee: Eno Thereska
> Fix For: 0.10.1.0
>
>
> Seen here: https://builds.apache.org/job/kafka-trunk-jdk8/915/console
> {code}
> org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
> queryOnRebalance[1] FAILED
> java.lang.AssertionError: Condition not met within timeout 3. waiting 
> for metadata, store and value to be non null
> at 
> org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:276)
> at 
> org.apache.kafka.streams.integration.QueryableStateIntegrationTest.verifyAllKVKeys(QueryableStateIntegrationTest.java:263)
> at 
> org.apache.kafka.streams.integration.QueryableStateIntegrationTest.queryOnRebalance(QueryableStateIntegrationTest.java:342)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4227) AdminManager is not shutdown when KafkaServer is shutdown

2016-09-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15530432#comment-15530432
 ] 

ASF GitHub Bot commented on KAFKA-4227:
---

GitHub user rajinisivaram opened a pull request:

https://github.com/apache/kafka/pull/1927

KAFKA-4227: Shutdown AdminManager when KafkaServer is shutdown

Terminate topic purgatory thread in AdminManager during server shutdown to 
avoid threads being left around in unit tests.
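
The shape of the fix is simply to stop the purgatory when the server stops; a 
hypothetical sketch (names assumed; the actual broker code is Scala):

{code}
// Hypothetical sketch: stop the topic purgatory so its ExpirationReaper
// thread does not outlive the server during unit tests.
public void shutdown() {
    topicPurgatory.shutdown();
}
{code}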

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rajinisivaram/kafka KAFKA-4227

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1927.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1927


commit 8fb83dd9300fb12846a1b8082532d72487754a01
Author: Rajini Sivaram 
Date:   2016-09-28T18:08:08Z

KAFKA-4227: Shutdown AdminManager when KafkaServer is shutdown




> AdminManager is not shutdown when KafkaServer is shutdown
> -
>
> Key: KAFKA-4227
> URL: https://issues.apache.org/jira/browse/KAFKA-4227
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.10.1.0
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>
> Lots of ExpirationReaper threads are left behind in unit tests, increasing 
> memory usage for tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #1927: KAFKA-4227: Shutdown AdminManager when KafkaServer...

2016-09-28 Thread rajinisivaram
GitHub user rajinisivaram opened a pull request:

https://github.com/apache/kafka/pull/1927

KAFKA-4227: Shutdown AdminManager when KafkaServer is shutdown

Terminate topic purgatory thread in AdminManager during server shutdown to 
avoid threads being left around in unit tests.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rajinisivaram/kafka KAFKA-4227

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1927.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1927


commit 8fb83dd9300fb12846a1b8082532d72487754a01
Author: Rajini Sivaram 
Date:   2016-09-28T18:08:08Z

KAFKA-4227: Shutdown AdminManager when KafkaServer is shutdown




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-2057) DelayedOperationTest.testRequestExpiry transient failure

2016-09-28 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15530417#comment-15530417
 ] 

Ismael Juma commented on KAFKA-2057:


There is a PR for the issue that cropped up again (which seems to be a different 
one): https://github.com/apache/kafka/pull/1890
It doesn't seem to fix it, though, even though the change makes logical sense.

> DelayedOperationTest.testRequestExpiry transient failure
> 
>
> Key: KAFKA-2057
> URL: https://issues.apache.org/jira/browse/KAFKA-2057
> Project: Kafka
>  Issue Type: Sub-task
>Affects Versions: 0.10.1.0
>Reporter: Guozhang Wang
>Assignee: Rajini Sivaram
>  Labels: newbie
> Attachments: KAFKA-2057.patch
>
>
> {code}
> kafka.server.DelayedOperationTest > testRequestExpiry FAILED
> junit.framework.AssertionFailedError: Time for expiration 19 should at 
> least 20
> at junit.framework.Assert.fail(Assert.java:47)
> at junit.framework.Assert.assertTrue(Assert.java:20)
> at 
> kafka.server.DelayedOperationTest.testRequestExpiry(DelayedOperationTest.scala:68)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3708) Rethink exception handling in KafkaStreams

2016-09-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15530414#comment-15530414
 ] 

ASF GitHub Bot commented on KAFKA-3708:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1819


> Rethink exception handling in KafkaStreams
> --
>
> Key: KAFKA-3708
> URL: https://issues.apache.org/jira/browse/KAFKA-3708
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Affects Versions: 0.10.0.0
>Reporter: Guozhang Wang
>Assignee: Damian Guy
>  Labels: user-experience
> Fix For: 0.10.1.0, 0.10.2.0
>
>
> As of 0.10.0.0, the worker threads (i.e. {{StreamThreads}}) can possibly 
> encounter the following runtime exceptions:
> 1) {{consumer.poll()}} could throw KafkaException if some of the 
> configurations are not accepted, such as topics not authorized for read / write 
> (security), an invalid session-timeout value, etc; these exceptions will be 
> thrown in the first ever {{poll()}}.
> 2) {{task.addRecords()}} could throw KafkaException (most likely 
> SerializationException) if the deserialization fails.
> 3) {{task.process() / punctuate()}} could throw various KafkaExceptions; for 
> example, serialization / deserialization errors, state storage operation 
> failures (RocksDBException, for example), producer sending failures, etc.
> 4) {{maybeCommit / commitAll / commitOne}} could throw various Exceptions if 
> the flushing of the state store fails, or when {{consumer.commitSync}} throws 
> exceptions other than {{CommitFailedException}}.
> For all of the above four cases, KafkaStreams does not capture and handle them, 
> but exposes them to users and lets users handle them via 
> {{KafkaStreams.setUncaughtExceptionHandler}}. We need to re-think whether the 
> library should just handle these cases without exposing them to users, and 
> kill the threads / migrate tasks to others, since they are all non-recoverable.
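
The user-facing hook mentioned above looks roughly like this (a minimal sketch 
against the 0.10.x API; the config values are examples):

{code}
import java.util.Properties;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStreamBuilder;

Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-app");            // example value
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // example value

KStreamBuilder builder = new KStreamBuilder();
// ... define the topology ...

KafkaStreams streams = new KafkaStreams(builder, props);
streams.setUncaughtExceptionHandler(new Thread.UncaughtExceptionHandler() {
    @Override
    public void uncaughtException(Thread t, Throwable e) {
        // Application-specific handling; today such errors kill the StreamThread.
        System.err.println("Stream thread " + t.getName() + " died: " + e);
    }
});
streams.start();
{code}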



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #1819: KAFKA-3708: rethink exception handling in Kafka St...

2016-09-28 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1819


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Resolved] (KAFKA-3708) Rethink exception handling in KafkaStreams

2016-09-28 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-3708.
--
   Resolution: Fixed
Fix Version/s: 0.10.2.0

Issue resolved by pull request 1819
[https://github.com/apache/kafka/pull/1819]

> Rethink exception handling in KafkaStreams
> --
>
> Key: KAFKA-3708
> URL: https://issues.apache.org/jira/browse/KAFKA-3708
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Affects Versions: 0.10.0.0
>Reporter: Guozhang Wang
>Assignee: Damian Guy
>  Labels: user-experience
> Fix For: 0.10.1.0, 0.10.2.0
>
>
> As of 0.10.0.0, the worker threads (i.e. {{StreamThreads}}) can possibly 
> encounter the following runtime exceptions:
> 1) {{consumer.poll()}} could throw KafkaException if some of the 
> configurations are not accepted, such as topics not authorized for read / write 
> (security), an invalid session-timeout value, etc; these exceptions will be 
> thrown in the first ever {{poll()}}.
> 2) {{task.addRecords()}} could throw KafkaException (most likely 
> SerializationException) if the deserialization fails.
> 3) {{task.process() / punctuate()}} could throw various KafkaExceptions; for 
> example, serialization / deserialization errors, state storage operation 
> failures (RocksDBException, for example), producer sending failures, etc.
> 4) {{maybeCommit / commitAll / commitOne}} could throw various Exceptions if 
> the flushing of the state store fails, or when {{consumer.commitSync}} throws 
> exceptions other than {{CommitFailedException}}.
> For all of the above four cases, KafkaStreams does not capture and handle them, 
> but exposes them to users and lets users handle them via 
> {{KafkaStreams.setUncaughtExceptionHandler}}. We need to re-think whether the 
> library should just handle these cases without exposing them to users, and 
> kill the threads / migrate tasks to others, since they are all non-recoverable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-4227) AdminManager is not shutdown when KafkaServer is shutdown

2016-09-28 Thread Rajini Sivaram (JIRA)
Rajini Sivaram created KAFKA-4227:
-

 Summary: AdminManager is not shutdown when KafkaServer is shutdown
 Key: KAFKA-4227
 URL: https://issues.apache.org/jira/browse/KAFKA-4227
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 0.10.1.0
Reporter: Rajini Sivaram
Assignee: Rajini Sivaram


Lots of ExpirationReaper threads are left behind in unit tests, increasing 
memory usage for tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-4058) Failure in org.apache.kafka.streams.integration.ResetIntegrationTest.testReprocessingFromScratchAfterReset

2016-09-28 Thread Eno Thereska (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eno Thereska resolved KAFKA-4058.
-
Resolution: Fixed

I haven't seen this in a while; let's resolve it for now.

> Failure in 
> org.apache.kafka.streams.integration.ResetIntegrationTest.testReprocessingFromScratchAfterReset
> --
>
> Key: KAFKA-4058
> URL: https://issues.apache.org/jira/browse/KAFKA-4058
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Matthias J. Sax
>  Labels: test
> Fix For: 0.10.1.0
>
>
> {code}
> java.lang.AssertionError: expected:<0> but was:<1>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:834)
>   at org.junit.Assert.assertEquals(Assert.java:645)
>   at org.junit.Assert.assertEquals(Assert.java:631)
>   at 
> org.apache.kafka.streams.integration.ResetIntegrationTest.cleanGlobal(ResetIntegrationTest.java:225)
>   at 
> org.apache.kafka.streams.integration.ResetIntegrationTest.testReprocessingFromScratchAfterReset(ResetIntegrationTest.java:103)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:114)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:57)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:66)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
>   at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
>   at 
> org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
>   at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
>   at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:109)
>   at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.internal.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:377)
>   at 
> 

[jira] [Reopened] (KAFKA-2057) DelayedOperationTest.testRequestExpiry transient failure

2016-09-28 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang reopened KAFKA-2057:
--

> DelayedOperationTest.testRequestExpiry transient failure
> 
>
> Key: KAFKA-2057
> URL: https://issues.apache.org/jira/browse/KAFKA-2057
> Project: Kafka
>  Issue Type: Sub-task
>Affects Versions: 0.10.1.0
>Reporter: Guozhang Wang
>Assignee: Rajini Sivaram
>  Labels: newbie
> Attachments: KAFKA-2057.patch
>
>
> {code}
> kafka.server.DelayedOperationTest > testRequestExpiry FAILED
> junit.framework.AssertionFailedError: Time for expiration 19 should at 
> least 20
> at junit.framework.Assert.fail(Assert.java:47)
> at junit.framework.Assert.assertTrue(Assert.java:20)
> at 
> kafka.server.DelayedOperationTest.testRequestExpiry(DelayedOperationTest.scala:68)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2057) DelayedOperationTest.testRequestExpiry transient failure

2016-09-28 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15530336#comment-15530336
 ] 

Guozhang Wang commented on KAFKA-2057:
--

Re-opening the issue since it has been observed in Jenkins again:

https://builds.apache.org/job/kafka-trunk-git-pr-jdk7/5928/testReport/junit/kafka.server/DelayedOperationTest/testRequestExpiry/

{code}
java.lang.AssertionError: Time for expiration 19 should at least 20
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at 
kafka.server.DelayedOperationTest.testRequestExpiry(DelayedOperationTest.scala:66)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:114)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:57)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:66)
at 
org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at 
org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
at 
org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
at 
org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:109)
at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at 
org.gradle.internal.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:377)
at 
org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:54)
at 
org.gradle.internal.concurrent.StoppableExecutorImpl$1.run(StoppableExecutorImpl.java:40)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
{code}

> DelayedOperationTest.testRequestExpiry transient failure
> 
>
> Key: KAFKA-2057
> URL: https://issues.apache.org/jira/browse/KAFKA-2057
> Project: Kafka
>  Issue Type: Sub-task
>Affects Versions: 0.10.1.0
>Reporter: Guozhang Wang
>Assignee: Rajini Sivaram
>   

[jira] [Updated] (KAFKA-2057) DelayedOperationTest.testRequestExpiry transient failure

2016-09-28 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-2057:
-
Affects Version/s: 0.10.1.0

> DelayedOperationTest.testRequestExpiry transient failure
> 
>
> Key: KAFKA-2057
> URL: https://issues.apache.org/jira/browse/KAFKA-2057
> Project: Kafka
>  Issue Type: Sub-task
>Affects Versions: 0.10.1.0
>Reporter: Guozhang Wang
>Assignee: Rajini Sivaram
>  Labels: newbie
> Attachments: KAFKA-2057.patch
>
>
> {code}
> kafka.server.DelayedOperationTest > testRequestExpiry FAILED
> junit.framework.AssertionFailedError: Time for expiration 19 should at 
> least 20
> at junit.framework.Assert.fail(Assert.java:47)
> at junit.framework.Assert.assertTrue(Assert.java:20)
> at 
> kafka.server.DelayedOperationTest.testRequestExpiry(DelayedOperationTest.scala:68)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-4216) Replication Quotas: Control Leader & Follower Throttled Replicas Separately

2016-09-28 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson updated KAFKA-4216:
---
Fix Version/s: 0.10.1.0

> Replication Quotas: Control Leader & Follower Throttled Replicas Separately
> ---
>
> Key: KAFKA-4216
> URL: https://issues.apache.org/jira/browse/KAFKA-4216
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.1.0
>Reporter: Ben Stopford
>Assignee: Ben Stopford
> Fix For: 0.10.1.0
>
>
> The current algorithm in kafka-reassign-partitions applies a throttle to all 
> moving replicas, be they leader-side or follower-side. 
> A preferable solution would be to change the throttled replica list to 
> specify whether the throttle applies to leader or follower. That way we can 
> ensure that the regular replication will not be throttled.  
> To do this we should change the way the throttled replica list is specified 
> so it is spread over two separate properties. One that corresponds to the 
> leader-side throttle, and the other that corresponds to the follower-side 
> throttle.
> quota.leader.replication.throttled.replicas = 
> [partId]:[replica],[partId]:[replica],[partId]:[replica] 
> quota.follower.replication.throttled.replicas = 
> [partId]:[replica],[partId]:[replica],[partId]:[replica] 
> Then, when applying the throttle, the leader quota can be applied to all 
> current replicas, and the follower quota can be applied only to the new 
> replicas we are creating as part of the rebalance. 
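
For illustration, with the proposed property names a throttled-replica 
configuration might look like this (the partition and broker ids below are 
made-up values):

{code}
# Throttle the leader side on the current replicas of partitions 0 and 1,
# and the follower side only on the new replica (broker 103) being created.
quota.leader.replication.throttled.replicas=0:101,0:102,1:101,1:102
quota.follower.replication.throttled.replicas=0:103,1:103
{code}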



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #1926: MINOR: Set JVM parameters for the Gradle Test exec...

2016-09-28 Thread ijuma
GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/1926

MINOR: Set JVM parameters for the Gradle Test executor processes

We suspect that the test suite hangs we have been seeing are
due to PermGen exhaustion. It is a common reason for
hard JVM lock-ups.
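
(For reference, PermGen headroom on JDK 7 test JVMs is usually raised with 
flags along these lines; the values are illustrative, not necessarily those 
used in the PR.)

{code}
-XX:MaxPermSize=512m -Xmx1024m
{code}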

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka test-jvm-params

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1926.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1926


commit f863c9443d11e169402a828cb2857bd0470b28bb
Author: Ismael Juma 
Date:   2016-09-28T16:04:21Z

Set JVM parameters for the Gradle Test executor processes

We suspect that the test suite hangs we have been seeing are
due to PermGen exhaustion. It is a common reason for
hard JVM lock-ups.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request #1925: Remove duplicate paragraph in Log Compaction docs

2016-09-28 Thread IvanVergiliev
GitHub user IvanVergiliev opened a pull request:

https://github.com/apache/kafka/pull/1925

Remove duplicate paragraph in Log Compaction docs

It seems that the last guarantee in the Log Compaction guarantees list is 
duplicated, so I removed the one with less formatting.

In case I'm missing something and these two paragraphs state two separate 
guarantees, I think the difference between them should be explained explicitly.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/IvanVergiliev/kafka patch-1

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1925.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1925


commit 77a1bd29cd57f71ea8175f913aefc42599d02837
Author: Ivan Vergiliev 
Date:   2016-09-28T13:04:25Z

Remove duplicate paragraph in Log Compaction docs




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3396) Unauthorized topics are returned to the user

2016-09-28 Thread Edoardo Comar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15529377#comment-15529377
 ] 

Edoardo Comar commented on KAFKA-3396:
--

The new PR is https://github.com/apache/kafka/pull/1908

> Unauthorized topics are returned to the user
> 
>
> Key: KAFKA-3396
> URL: https://issues.apache.org/jira/browse/KAFKA-3396
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.9.0.0, 0.10.0.0
>Reporter: Grant Henke
>Assignee: Edoardo Comar
>Priority: Critical
> Fix For: 0.10.1.0
>
>
> Kafka's clients and protocol expose unauthorized topics to the end user. 
> This is often considered a security hole. To some, the topic name is 
> considered sensitive information; those who do not consider the name 
> sensitive still consider it extra information that allows a user to try to 
> circumvent security. Instead, if a user does not have access to the topic, 
> the servers should act as if the topic does not exist. 
> To solve this some of the changes could include:
>   - The broker should not return a TOPIC_AUTHORIZATION(29) error for 
> requests (metadata, produce, fetch, etc) that include a topic that the user 
> does not have DESCRIBE access to.
>   - A user should not receive a TopicAuthorizationException when they do 
> not have DESCRIBE access to a topic or the cluster.
>  - The client should not maintain and expose a list of unauthorized 
> topics in org.apache.kafka.common.Cluster. 
> Other changes may be required that are not listed here. Further analysis is 
> needed. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Jenkins build is back to normal : kafka-trunk-jdk7 #1577

2016-09-28 Thread Apache Jenkins Server
See 



Re: [DISCUSS] KIP-83 - Allow multiple SASL PLAIN authenticated Java clients in a single JVM process

2016-09-28 Thread Edoardo Comar
Thanks Rajini and Harsha
I'll update the KIP 
--
Edoardo Comar
MQ Cloud Technologies
eco...@uk.ibm.com
+44 (0)1962 81 5576 
IBM UK Ltd, Hursley Park, SO21 2JN

IBM United Kingdom Limited Registered in England and Wales with number 
741598 Registered office: PO Box 41, North Harbour, Portsmouth, Hants. PO6 3AU



From:   Rajini Sivaram 
To: dev@kafka.apache.org
Date:   28/09/2016 10:40
Subject:Re: [DISCUSS] KIP-83 - Allow multiple SASL PLAIN 
authenticated Java clients in a single JVM process



Edo,

I was going to write a KIP for this next week :-) I was thinking along the
same lines as Harsha, i.e., enable multiple LoginManagers to co-exist in a
JVM. The multi-user login module in MessageHub made sense at the time to
overcome the limitation in Kafka, without changing Kafka itself. But for
the KIP, it would be better to have a solution that supports multiple users
for any SASL mechanism.


On Wed, Sep 28, 2016 at 5:57 AM, Harsha Chintalapani wrote:

> Edoardo,
> Thanks for the KIP. As pointed out in the JIRA can you make
> sure this is not just a specific change for SASL plain but make changes in
> general to LoginManager such that it's not a singleton.
>
> Thanks,
> Harsha
>
> On Tue, Sep 27, 2016 at 10:15 AM Edoardo Comar  wrote:
>
> > Hi,
> > I had a go at a KIP that addresses this JIRA
> > https://issues.apache.org/jira/browse/KAFKA-4180
> > "Shared authentification with multiple actives Kafka 
producers/consumers"
> >
> > which is a limitation of the current Java client that we (IBM MessageHub)
> > get asked quite often lately.
> >
> > We will have a go at a PR soon, just as a proof of concept, but as it
> > introduces new public interfaces it needs a KIP.
> >
> > I'll welcome your input.
> >
> > Edo
> > --
> > Edoardo Comar
> > MQ Cloud Technologies
> > eco...@uk.ibm.com
> > +44 (0)1962 81 5576
> > IBM UK Ltd, Hursley Park, SO21 2JN
> >
> > IBM United Kingdom Limited Registered in England and Wales with number
> > 741598 Registered office: PO Box 41, North Harbour, Portsmouth, Hants. PO6 3AU
> > Unless stated otherwise above:
> > IBM United Kingdom Limited - Registered in England and Wales with number
> > 741598.
> > Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU
> >
>



-- 
Regards,

Rajini



Unless stated otherwise above:
IBM United Kingdom Limited - Registered in England and Wales with number 
741598. 
Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU


[jira] [Commented] (KAFKA-3296) All consumer reads hang indefinately

2016-09-28 Thread JIRA

[ 
https://issues.apache.org/jira/browse/KAFKA-3296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15529088#comment-15529088
 ] 

Anıl Chalil commented on KAFKA-3296:


Is it possible to circumvent this issue by not using auto-generated ids? Or is 
there any other workaround?

> All consumer reads hang indefinately
> 
>
> Key: KAFKA-3296
> URL: https://issues.apache.org/jira/browse/KAFKA-3296
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0, 0.9.0.1
>Reporter: Simon Cooper
>Priority: Critical
> Attachments: controller.zip, kafkalogs.zip
>
>
> We've got several integration tests that bring up systems on VMs for testing. 
> We've recently upgraded to 0.9, and very occasionally we see an 
> issue where every consumer that tries to read from the broker hangs, spamming 
> the following in their logs:
> {code}2016-02-26T12:25:37,856 | DEBUG | o.a.k.c.NetworkClient 
> [pool-10-thread-1] | Sending metadata request 
> ClientRequest(expectResponse=true, callback=null, 
> request=RequestSend(header={api_key=3,api_version=0,correlation_id=21905,client_id=consumer-1},
>  body={topics=[Topic1]}), isInitiatedByNetworkClient, 
> createdTimeMs=1456489537856, sendTimeMs=0) to node 1
> 2016-02-26T12:25:37,856 | DEBUG | o.a.k.c.Metadata [pool-10-thread-1] | 
> Updated cluster metadata version 10954 to Cluster(nodes = [Node(1, 
> server.internal, 9092)], partitions = [Partition(topic = Topic1, partition = 
> 0, leader = 1, replicas = [1,], isr = [1,]])
> 2016-02-26T12:25:37,856 | DEBUG | o.a.k.c.c.i.AbstractCoordinator 
> [pool-10-thread-1] | Issuing group metadata request to broker 1
> 2016-02-26T12:25:37,857 | DEBUG | o.a.k.c.c.i.AbstractCoordinator 
> [pool-10-thread-1] | Group metadata response 
> ClientResponse(receivedTimeMs=1456489537857, disconnected=false, 
> request=ClientRequest(expectResponse=true, 
> callback=org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler@28edb273,
>  
> request=RequestSend(header={api_key=10,api_version=0,correlation_id=21906,client_id=consumer-1},
>  body={group_id=}), createdTimeMs=1456489537856, sendTimeMs=1456489537856), 
> responseBody={error_code=15,coordinator={node_id=-1,host=,port=-1}})
> 2016-02-26T12:25:37,956 | DEBUG | o.a.k.c.NetworkClient [pool-10-thread-1] | 
> Sending metadata request ClientRequest(expectResponse=true, callback=null, 
> request=RequestSend(header={api_key=3,api_version=0,correlation_id=21907,client_id=consumer-1},
>  body={topics=[Topic1]}), isInitiatedByNetworkClient, 
> createdTimeMs=1456489537956, sendTimeMs=0) to node 1
> 2016-02-26T12:25:37,956 | DEBUG | o.a.k.c.Metadata [pool-10-thread-1] | 
> Updated cluster metadata version 10955 to Cluster(nodes = [Node(1, 
> server.internal, 9092)], partitions = [Partition(topic = Topic1, partition = 
> 0, leader = 1, replicas = [1,], isr = [1,]])
> 2016-02-26T12:25:37,956 | DEBUG | o.a.k.c.c.i.AbstractCoordinator 
> [pool-10-thread-1] | Issuing group metadata request to broker 1
> 2016-02-26T12:25:37,957 | DEBUG | o.a.k.c.c.i.AbstractCoordinator 
> [pool-10-thread-1] | Group metadata response 
> ClientResponse(receivedTimeMs=1456489537957, disconnected=false, 
> request=ClientRequest(expectResponse=true, 
> callback=org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler@40cee8cc,
>  
> request=RequestSend(header={api_key=10,api_version=0,correlation_id=21908,client_id=consumer-1},
>  body={group_id=}), createdTimeMs=1456489537956, sendTimeMs=1456489537956), 
> responseBody={error_code=15,coordinator={node_id=-1,host=,port=-1}})
> 2016-02-26T12:25:38,056 | DEBUG | o.a.k.c.NetworkClient [pool-10-thread-1] | 
> Sending metadata request ClientRequest(expectResponse=true, callback=null, 
> request=RequestSend(header={api_key=3,api_version=0,correlation_id=21909,client_id=consumer-1},
>  body={topics=[Topic1]}), isInitiatedByNetworkClient, 
> createdTimeMs=1456489538056, sendTimeMs=0) to node 1
> 2016-02-26T12:25:38,056 | DEBUG | o.a.k.c.Metadata [pool-10-thread-1] | 
> Updated cluster metadata version 10956 to Cluster(nodes = [Node(1, 
> server.internal, 9092)], partitions = [Partition(topic = Topic1, partition = 
> 0, leader = 1, replicas = [1,], isr = [1,]])
> 2016-02-26T12:25:38,056 | DEBUG | o.a.k.c.c.i.AbstractCoordinator 
> [pool-10-thread-1] | Issuing group metadata request to broker 1
> 2016-02-26T12:25:38,057 | DEBUG | o.a.k.c.c.i.AbstractCoordinator 
> [pool-10-thread-1] | Group metadata response 
> ClientResponse(receivedTimeMs=1456489538057, disconnected=false, 
> request=ClientRequest(expectResponse=true, 
> callback=org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler@439e25fb,
>  
> 
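The repeated {{error_code=15}} in the group metadata responses above is the 
clue: in the 0.9/0.10 Java client's protocol error table it maps to 
GROUP_COORDINATOR_NOT_AVAILABLE, which the client treats as retriable, hence 
the endless metadata loop. A minimal sketch of decoding it, assuming the 
org.apache.kafka.common.protocol.Errors enum from the client jar is on the 
classpath (class name here is illustrative):
{code}
import org.apache.kafka.common.protocol.Errors;

public class DecodeErrorCode {
    public static void main(String[] args) {
        // error_code=15 from the responses above: no group coordinator is
        // available, so the consumer keeps retrying its metadata requests.
        Errors e = Errors.forCode((short) 15);
        System.out.println(e.name());                  // GROUP_COORDINATOR_NOT_AVAILABLE
        System.out.println(e.exception().getMessage());
    }
}
{code}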

Re: [DISCUSS] KIP-83 - Allow multiple SASL PLAIN authenticated Java clients in a single JVM process

2016-09-28 Thread Rajini Sivaram
Edo,

I was going to write a KIP for this next week :-) I was thinking along the
same lines as Harsha, i.e., enable multiple LoginManagers to co-exist in a
JVM. The multi-user login module in MessageHub made sense at the time to
overcome the limitation in Kafka, without changing Kafka itself. But for
the KIP, it would be better to have a solution that supports multiple users
for any SASL mechanism.
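To make the idea concrete, here is a minimal sketch of what "multiple
LoginManagers co-existing in a JVM" could look like: instances keyed by JAAS
context instead of a single static one. The LoginManager interface and all
names below are placeholders for illustration, not Kafka's actual internals:
{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

final class LoginManagerCache {

    /** Placeholder for the per-login state Kafka would hold. */
    interface LoginManager {
        void login() throws Exception;
    }

    private static final Map<String, LoginManager> INSTANCES = new ConcurrentHashMap<>();

    /** One LoginManager per distinct JAAS context, created on first use. */
    static LoginManager acquire(String jaasContextName) {
        return INSTANCES.computeIfAbsent(jaasContextName, name -> {
            LoginManager lm = createFor(name);
            try {
                lm.login();
            } catch (Exception e) {
                throw new RuntimeException("Login failed for " + name, e);
            }
            return lm;
        });
    }

    private static LoginManager createFor(String name) {
        return () -> { /* perform the JAAS login for this context */ };
    }
}
{code}
The design point is simply that clients holding different credentials resolve
to different cache keys, so one JVM process can authenticate as several users.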


On Wed, Sep 28, 2016 at 5:57 AM, Harsha Chintalapani 
wrote:

> Edoardo,
> Thanks for the KIP. As pointed out in the JIRA, can you make
> sure this is not just a change specific to SASL PLAIN, but make changes in
> general to LoginManager such that it's not a singleton.
>
> Thanks,
> Harsha
>
> On Tue, Sep 27, 2016 at 10:15 AM Edoardo Comar  wrote:
>
> > Hi,
> > I had a go at a KIP that addresses this JIRA
> > https://issues.apache.org/jira/browse/KAFKA-4180
> > "Shared authentification with multiple actives Kafka producers/consumers"
> >
> > which is a limitation of the current Java client that we (IBM MessageHub)
> > have been asked about quite often lately.
> >
> > We will have a go at a PR soon, just as a proof of concept, but as it
> > introduces new public interfaces it needs a KIP.
> >
> > I'll welcome your input.
> >
> > Edo
> > --
> > Edoardo Comar
> > MQ Cloud Technologies
> > eco...@uk.ibm.com
> > +44 (0)1962 81 5576
> > IBM UK Ltd, Hursley Park, SO21 2JN
> >
> > IBM United Kingdom Limited Registered in England and Wales with number
> > 741598 Registered office: PO Box 41, North Harbour, Portsmouth, Hants.
> PO6
> > 3AU
> > Unless stated otherwise above:
> > IBM United Kingdom Limited - Registered in England and Wales with number
> > 741598.
> > Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6
> 3AU
> >
>



-- 
Regards,

Rajini


[jira] [Commented] (KAFKA-4074) Deleting a topic can make it unavailable even if delete.topic.enable is false

2016-09-28 Thread Manikumar Reddy (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15528682#comment-15528682
 ] 

Manikumar Reddy commented on KAFKA-4074:


[~hachikuji] Yes, resolving this as a duplicate.

> Deleting a topic can make it unavailable even if delete.topic.enable is false
> -
>
> Key: KAFKA-4074
> URL: https://issues.apache.org/jira/browse/KAFKA-4074
> Project: Kafka
>  Issue Type: Bug
>  Components: controller
>Reporter: Joel Koshy
>Assignee: Manikumar Reddy
> Fix For: 0.10.1.0
>
>
> The {{delete.topic.enable}} configuration does not completely block the 
> effects of delete topic since the controller may (indirectly) query the list 
> of topics under the delete-topic znode.
> To reproduce:
> * Delete topic X
> * Force a controller move (either by bouncing or removing the controller 
> znode)
> * The new controller will send out UpdateMetadataRequests with leader=-2 for 
> the partitions of X
> * Producers eventually stop producing to that topic
> The reason for this is that when ControllerChannelManager adds 
> UpdateMetadataRequests for brokers, we directly use the partitionsToBeDeleted 
> field of the DeleteTopicManager (which is set to the partitions of the topics 
> under the delete-topic znode on controller startup).
> In order to get out of the situation you have to remove X from the znode and 
> then force another controller move.
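For the record, the manual recovery described above can be scripted. A hedged
sketch using the plain ZooKeeper Java client (topic name X and the connect
string are placeholders; /admin/delete_topics and /controller are Kafka's
standard znode paths):
{code}
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooKeeper;

public class RecoverDeletedTopic {
    public static void main(String[] args) throws Exception {
        ZooKeeper zk = new ZooKeeper("zk-host:2181", 30000, event -> { });
        try {
            // Step 1: remove topic X from the delete-topic znode.
            zk.delete("/admin/delete_topics/X", -1);
        } catch (KeeperException.NoNodeException e) {
            // Already gone; nothing to do.
        }
        // Step 2: delete /controller to force another controller move.
        zk.delete("/controller", -1);
        zk.close();
    }
}
{code}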



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-4074) Deleting a topic can make it unavailable even if delete.topic.enable is false

2016-09-28 Thread Manikumar Reddy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4074?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar Reddy resolved KAFKA-4074.

Resolution: Duplicate

> Deleting a topic can make it unavailable even if delete.topic.enable is false
> -
>
> Key: KAFKA-4074
> URL: https://issues.apache.org/jira/browse/KAFKA-4074
> Project: Kafka
>  Issue Type: Bug
>  Components: controller
>Reporter: Joel Koshy
>Assignee: Manikumar Reddy
> Fix For: 0.10.1.0
>
>
> The {{delete.topic.enable}} configuration does not completely block the 
> effects of delete topic since the controller may (indirectly) query the list 
> of topics under the delete-topic znode.
> To reproduce:
> * Delete topic X
> * Force a controller move (either by bouncing or removing the controller 
> znode)
> * The new controller will send out UpdateMetadataRequests with leader=-2 for 
> the partitions of X
> * Producers eventually stop producing to that topic
> The reason for this is that when ControllerChannelManager adds 
> UpdateMetadataRequests for brokers, we directly use the partitionsToBeDeleted 
> field of the DeleteTopicManager (which is set to the partitions of the topics 
> under the delete-topic znode on controller startup).
> In order to get out of the situation you have to remove X from the znode and 
> then force another controller move.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[DISCUSS] KIP-80: Kafka REST Server

2016-09-28 Thread Manikumar
Hi Kafka Devs,

I created KIP-80 to add a Kafka REST Server to the Kafka repository.

Open-source alternatives are already available, but we would like to add a
REST server that many users ask for under the Apache Kafka repo. Many data
infrastructure tools come with a REST interface. It would be useful to have
built-in REST API support for producing and consuming messages, plus an admin
interface, for integrating with external management and provisioning tools.
Hosting the REST server in the Apache Kafka repository would also make
maintaining it and adding new features easier, because the Apache community
can contribute directly.
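As a purely illustrative example of the kind of produce call such a REST
server might expose (the endpoint path, port, and JSON shape below are
invented for illustration; the KIP does not define them), using only the
JDK's HttpURLConnection:
{code}
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class RestProduceExample {
    public static void main(String[] args) throws Exception {
        // Hypothetical endpoint: POST a record to topic "test" on a local REST server.
        URL url = new URL("http://localhost:8082/topics/test");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);

        String body = "{\"records\":[{\"key\":\"k1\",\"value\":\"hello\"}]}";
        try (OutputStream os = conn.getOutputStream()) {
            os.write(body.getBytes(StandardCharsets.UTF_8));
        }
        System.out.println("HTTP " + conn.getResponseCode());
        conn.disconnect();
    }
}
{code}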

The KIP wiki is the following:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-80%3A+Kafka+Rest+Server

Your comments and feedback are welcome.

Thanks,
Manikumar


[jira] [Assigned] (KAFKA-1954) Speed Up The Unit Tests

2016-09-28 Thread Balint Molnar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Balint Molnar reassigned KAFKA-1954:


Assignee: Balint Molnar  (was: Sriharsha Chintalapani)

> Speed Up The Unit Tests
> ---
>
> Key: KAFKA-1954
> URL: https://issues.apache.org/jira/browse/KAFKA-1954
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jay Kreps
>Assignee: Balint Molnar
>  Labels: newbie++
> Attachments: KAFKA-1954.patch
>
>
> The server unit tests are pretty slow. They take about 8m40s on my machine. 
> Combined with slow scala compile time this is kind of painful.
> Almost all of this time comes from the integration tests which start one or 
> more brokers and then shut them down.
> Our finding has been that these integration tests are actually quite useful 
> so we probably can't just get rid of them.
> Here are some times:
> Zk startup: 100ms
> Kafka server startup: 600ms
> Kafka server shutdown: 500ms
>  
> So you can see that an integration test suite with 10 tests that starts and 
> stops a 3-node cluster for each test will take ~34 seconds (0.1s for ZK plus 
> 3 × (0.6s + 0.5s) for the brokers, i.e. ~3.4s per test) even if the tests 
> themselves are instantaneous.
> I think the best solution to this is to get the test harness classes in shape 
> and then performance tune them a bit as this would potentially speed 
> everything up. There are several test harness classes:
> - ZooKeeperTestHarness
> - KafkaServerTestHarness
> - ProducerConsumerTestHarness
> - IntegrationTestHarness (similar to ProducerConsumerTestHarness but using 
> new clients)
> Unfortunately, tests often don't use the right harness; they use a 
> lower-level harness than they should and manually create things. Usually the 
> cause of this is that the harness is missing some feature.
> I think the right thing to do here is
> 1. Get the tests converted to the best possible harness. If you are testing 
> producers and consumers then you should use the harness that creates all that 
> and shuts it down for you.
> 2. Optimize the harnesses to be faster.
> How can we optimize the harnesses? I'm not sure; I would solicit ideas. Here 
> are a few:
> 1. It's worth analyzing the logging to see what is taking up time in the 
> startup and shutdown.
> 2. There may be things like controlled shutdown that we can disable (since we 
> are anyway going to discard the brokers after shutdown).
> 3. The harnesses could probably start all the servers and all the clients in 
> parallel (see the sketch below).
> 4. We may be able to tune down the resource usage in the server config for 
> test cases a bit.
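A minimal sketch of idea 3, assuming nothing about Kafka's actual harness
classes (EmbeddedBroker below is a hypothetical stand-in for whatever wraps
KafkaServer): a JUnit 4 base class that boots its brokers in parallel before
each test and discards them after.
{code}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

import org.junit.After;
import org.junit.Before;

public abstract class ParallelClusterTestHarness {

    /** Hypothetical stand-in for an embeddable broker (e.g. a wrapped KafkaServer). */
    public interface EmbeddedBroker {
        void startup();
        void shutdown();
    }

    protected abstract int numBrokers();
    protected abstract EmbeddedBroker createBroker(int id);

    private final List<EmbeddedBroker> brokers = new ArrayList<>();

    @Before
    public void setUp() throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(numBrokers());
        List<Future<?>> startups = new ArrayList<>();
        for (int i = 0; i < numBrokers(); i++) {
            EmbeddedBroker broker = createBroker(i);
            brokers.add(broker);
            startups.add(pool.submit(broker::startup)); // idea 3: start brokers concurrently
        }
        for (Future<?> f : startups)
            f.get(30, TimeUnit.SECONDS); // surface a hung startup instead of blocking forever
        pool.shutdown();
    }

    @After
    public void tearDown() {
        // Brokers are discarded after each test, so a quick stop is fine (idea 2).
        for (EmbeddedBroker broker : brokers)
            broker.shutdown();
        brokers.clear();
    }
}
{code}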



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk8 #920

2016-09-28 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] HOTFIX: fix npe in StreamsMetadataState when onChange has not been

[wangguoz] MINOR: fixes a few error logging formats

[wangguoz] MINOR: add test to make sure ProcessorStateManager can handle State

--
[...truncated 12014 lines...]
1 warning
:core:processResources UP-TO-DATE
:core:classes
:core:copyDependantLibs
:core:jar
:examples:compileJava
warning: [options] bootstrap class path not set in conjunction with -source 1.7
1 warning

:examples:processResources UP-TO-DATE
:examples:classes
:examples:checkstyleMain
:examples:compileTestJava UP-TO-DATE
:examples:processTestResources UP-TO-DATE
:examples:testClasses UP-TO-DATE
:examples:checkstyleTest UP-TO-DATE
:examples:test UP-TO-DATE
:log4j-appender:compileJava
warning: [options] bootstrap class path not set in conjunction with -source 1.7
1 warning

:log4j-appender:processResources UP-TO-DATE
:log4j-appender:classes
:log4j-appender:checkstyleMain
:log4j-appender:compileTestJava
warning: [options] bootstrap class path not set in conjunction with -source 1.7
1 warning

:log4j-appender:processTestResources UP-TO-DATE
:log4j-appender:testClasses
:log4j-appender:checkstyleTest
:log4j-appender:test

org.apache.kafka.log4jappender.KafkaLog4jAppenderTest > testLog4jAppends STARTED

org.apache.kafka.log4jappender.KafkaLog4jAppenderTest > testLog4jAppends PASSED

org.apache.kafka.log4jappender.KafkaLog4jAppenderTest > testKafkaLog4jConfigs 
STARTED

org.apache.kafka.log4jappender.KafkaLog4jAppenderTest > testKafkaLog4jConfigs 
PASSED
:core:compileTestJava UP-TO-DATE
:core:compileTestScala
:186:
 constructor ListOffsetRequest in class ListOffsetRequest is deprecated: see 
corresponding Javadoc for more information.
new requests.ListOffsetRequest(Map(tp -> new 
ListOffsetRequest.PartitionData(0, 100)).asJava)
^
:186:
 class PartitionData in object ListOffsetRequest is deprecated: see 
corresponding Javadoc for more information.
new requests.ListOffsetRequest(Map(tp -> new 
ListOffsetRequest.PartitionData(0, 100)).asJava)
   ^
:88:
 method createAndShutdownStep in class MetricsTest is deprecated: This test has 
been deprecated and it will be removed in a future release
createAndShutdownStep("group0", "consumer0", "producer0")
^
:158:
 constructor FetchRequest in class FetchRequest is deprecated: see 
corresponding Javadoc for more information.
val fetchRequest = new FetchRequest(Int.MaxValue, 0, 
createPartitionMap(maxPartitionBytes, Seq(topicPartition)))
   ^
four warnings found
:core:processTestResources UP-TO-DATE
:core:testClasses
:connect:api:compileJava
warning: [options] bootstrap class path not set in conjunction with -source 1.7
1 warning

:connect:api:processResources UP-TO-DATE
:connect:api:classes
:connect:api:copyDependantLibs
:connect:api:jar
:connect:json:compileJava
warning: [options] bootstrap class path not set in conjunction with -source 1.7
1 warning

:connect:json:processResources UP-TO-DATE
:connect:json:classes
:connect:json:copyDependantLibs
:connect:json:jar
:streams:compileJava
warning: [options] bootstrap class path not set in conjunction with -source 1.7
Note: Some input files use unchecked or unsafe operations.
Note: Recompile with -Xlint:unchecked for details.
1 warning

:streams:processResources UP-TO-DATE
:streams:classes
:streams:checkstyleMain
:streams:compileTestJava
warning: [options] bootstrap class path not set in conjunction with -source 1.7
:203:
 warning: non-varargs call of varargs method with inexact argument type for 
last parameter;
testStream.branch(null);
  ^
  cast to Predicate for a varargs call
  cast to Predicate[] for a non-varargs call and to suppress 
this warning
Note: 

 uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
Note: Some input files use unchecked or unsafe operations.
Note: Recompile with -Xlint:unchecked for details.
2 warnings

:streams:processTestResources
:streams:testClasses
:streams:checkstyleTest
:streams:test