[jira] [Commented] (KAFKA-3896) Unstable test KStreamRepartitionJoinTest.shouldCorrectlyRepartitionOnJoinOperations

2017-01-19 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15831285#comment-15831285
 ] 

Guozhang Wang commented on KAFKA-3896:
--

The root cause of this issue is that, before 
https://github.com/apache/kafka/pull/2389, we used delete-and-recreate with 
{{StreamsKafkaClient}}, and due to a bad design pattern we created and deleted 
topics one at a time. For this test there are 12 topics to be created, and 
each create call is coupled with a delete call with an empty topic list. As a 
result, the consumer could time out during the assignment in a rebalance, and 
the next leader had to do the same work again because the "makeReady" calls 
are one at a time.

PR https://github.com/apache/kafka/pull/2389 remedies this problem by no 
longer calling delete, which is why we have not seen this issue since that PR 
was merged. However, we still need to make the {{InternalTopicManager}} more 
efficient.
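To make the cost difference concrete, here is a hypothetical back-of-the-envelope sketch (the class, method names, and latency figure below are all illustrative assumptions, not the real {{StreamsKafkaClient}} or {{InternalTopicManager}} API):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical illustration only: it contrasts one create+delete round trip
// per topic against a single batched create request.
public class TopicBatchSketch {
    static final long ROUND_TRIP_MS = 50; // assumed per-request broker latency

    // one-at-a-time: each "makeReady" call pairs a create with a delete call
    static long oneAtATimeCostMs(List<String> topics) {
        return topics.size() * 2 * ROUND_TRIP_MS;
    }

    // batched: all topics travel in one create request
    static long batchedCostMs(List<String> topics) {
        return topics.isEmpty() ? 0 : ROUND_TRIP_MS;
    }

    public static void main(String[] args) {
        List<String> topics = new ArrayList<>();
        for (int i = 1; i <= 12; i++) {
            topics.add("repartition-topic-" + i); // 12 topics, as in this test
        }
        System.out.println("one-at-a-time: " + oneAtATimeCostMs(topics) + " ms"); // 1200 ms
        System.out.println("batched:       " + batchedCostMs(topics) + " ms");    // 50 ms
    }
}
```

With a single batched request, the per-topic round trips disappear, which is the kind of efficiency gain still needed here.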

> Unstable test 
> KStreamRepartitionJoinTest.shouldCorrectlyRepartitionOnJoinOperations
> ---
>
> Key: KAFKA-3896
> URL: https://issues.apache.org/jira/browse/KAFKA-3896
> Project: Kafka
>  Issue Type: Sub-task
>  Components: unit tests
>Reporter: Ashish K Singh
>Assignee: Guozhang Wang
> Fix For: 0.10.1.0
>
>
> {{KStreamRepartitionJoinTest.shouldCorrectlyRepartitionOnJoinOperations}} 
> seems to be unstable. A failure can be found 
> [here|https://builds.apache.org/job/kafka-trunk-git-pr-jdk7/4363/]. Could not 
> reproduce the test failure locally though.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4631) Refresh consumer metadata more frequently for unknown subscribed topics

2017-01-18 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15829268#comment-15829268
 ] 

Guozhang Wang commented on KAFKA-4631:
--

I think we do not need a new config for this; a hard-coded value like 100ms 
looks good to me.

> Refresh consumer metadata more frequently for unknown subscribed topics
> ---
>
> Key: KAFKA-4631
> URL: https://issues.apache.org/jira/browse/KAFKA-4631
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Assignee: Matthias J. Sax
>Priority: Minor
> Fix For: 0.10.2.0
>
>
> By default, the consumer refreshes metadata every 5 minutes. In testing, it 
> can often happen that a topic is created at about the same time that the 
> consumer is started. In the worst case, creation finishes after the consumer 
> fetches metadata, and the test must wait 5 minutes for the consumer to 
> refresh metadata in order to discover the topic. To address this problem, 
> users can decrease the metadata refresh interval, but this means more 
> frequent refreshes even after all topics are known. An improvement would be 
> to internally let the consumer fetch metadata more frequently when the 
> consumer encounters unknown topics. Perhaps every 5-10 seconds would be 
> reasonable, for example.
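The proposed behavior can be sketched as a tiny policy function (a hedged illustration with made-up names, not the real consumer internals):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Sketch of the proposal: keep refreshing on a short hard-coded backoff while
// any subscribed topic is still unknown, then fall back to the normal interval.
public class MetadataRefreshPolicy {
    static final long NORMAL_REFRESH_MS = 5 * 60 * 1000; // metadata.max.age.ms default
    static final long UNKNOWN_TOPIC_BACKOFF_MS = 100;    // short interval discussed above

    static long nextRefreshDelayMs(Set<String> subscribed, Set<String> knownTopics) {
        return knownTopics.containsAll(subscribed)
                ? NORMAL_REFRESH_MS         // all topics resolved: normal cadence
                : UNKNOWN_TOPIC_BACKOFF_MS; // unknown topic: refresh again quickly
    }

    public static void main(String[] args) {
        Set<String> subscribed = new HashSet<>(Arrays.asList("orders", "payments"));
        Set<String> known = new HashSet<>(Arrays.asList("orders"));
        System.out.println(nextRefreshDelayMs(subscribed, known)); // 100
        known.add("payments"); // topic creation finished
        System.out.println(nextRefreshDelayMs(subscribed, known)); // 300000
    }
}
```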





[jira] [Commented] (KAFKA-4222) Transient failure in QueryableStateIntegrationTest.queryOnRebalance

2017-01-18 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4222?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15829117#comment-15829117
 ] 

Guozhang Wang commented on KAFKA-4222:
--

[~mjsax] Found this failure again after the previous PR was merged, so I think 
it still needs to be investigated: 
https://builds.apache.org/job/kafka-pr-jdk7-scala2.10/991/testReport/junit/org.apache.kafka.streams.integration/QueryableStateIntegrationTest/queryOnRebalance_1_/

{code}
Error Message

java.lang.AssertionError: Condition not met within timeout 3. waiting for 
metadata, store and value to be non null
Stacktrace

java.lang.AssertionError: Condition not met within timeout 3. waiting for 
metadata, store and value to be non null
at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:259)
at 
org.apache.kafka.streams.integration.QueryableStateIntegrationTest.verifyAllKVKeys(QueryableStateIntegrationTest.java:270)
at 
org.apache.kafka.streams.integration.QueryableStateIntegrationTest.queryOnRebalance(QueryableStateIntegrationTest.java:352)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at org.junit.runners.Suite.runChild(Suite.java:128)
at org.junit.runners.Suite.runChild(Suite.java:27)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
at org.junit.rules.RunRules.evaluate(RunRules.java:20)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:114)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:57)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:66)
at 
org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
at sun.reflect.GeneratedMethodAccessor8.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at 
org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
at 
org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
at 
org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:109)
at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
{code}
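For reference, the failing wait is a simple polling loop; a minimal sketch in the spirit of {{TestUtils.waitForCondition}} (an illustration, not the actual Kafka test utility) looks like:

```java
import java.util.function.BooleanSupplier;

// Poll until the condition holds or the timeout elapses, then fail with the
// supplied message -- the same shape as the assertion error quoted above.
public class WaitForCondition {
    static void waitForCondition(BooleanSupplier condition, long timeoutMs, String message) {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (condition.getAsBoolean()) {
                return; // condition met before the deadline
            }
            try {
                Thread.sleep(10); // poll interval
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                throw new RuntimeException(e);
            }
        }
        throw new AssertionError("Condition not met within timeout " + timeoutMs + ". " + message);
    }

    public static void main(String[] args) {
        long start = System.currentTimeMillis();
        // condition that becomes true after roughly 50 ms
        waitForCondition(() -> System.currentTimeMillis() - start > 50, 5000,
                "waiting for elapsed time");
        System.out.println("condition met");
    }
}
```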

[jira] [Updated] (KAFKA-3738) Add load system tests for Streams

2017-01-17 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-3738:
-
Issue Type: Task  (was: Bug)

> Add load system tests for Streams
> -
>
> Key: KAFKA-3738
> URL: https://issues.apache.org/jira/browse/KAFKA-3738
> Project: Kafka
>  Issue Type: Task
>  Components: streams
>Reporter: Guozhang Wang
>  Labels: test
>
> Today in system tests we have mainly two types of tests: 1) correctness tests 
> for delivery guarantees with some "chaos monkey" mechanism; 2) performance 
> tests for evaluating single-node efficiency.
> We want to consider adding another type of system test called "load tests" 
> for Streams, in which we have a large-scale setting of a Streams app, let it 
> run under heavy load for some time, and measure CPU / memory / disk / etc. to 
> check that it behaves properly under load.
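As a hedged illustration of the measurement side (not an actual Kafka system test; all names below are made up), such a load test would sample resource usage while driving load:

```java
import java.util.ArrayList;
import java.util.List;

// Drive a simple processing loop and periodically sample used heap -- the kind
// of measurement a Streams "load test" would record alongside CPU and disk.
public class LoadProbeSketch {
    static long usedHeapBytes() {
        Runtime rt = Runtime.getRuntime();
        return rt.totalMemory() - rt.freeMemory();
    }

    public static void main(String[] args) {
        List<Long> samples = new ArrayList<>();
        List<byte[]> retained = new ArrayList<>();
        for (int i = 0; i < 1_000; i++) {
            retained.add(new byte[1024]); // simulated per-record state under load
            if (i % 100 == 0) {
                samples.add(usedHeapBytes()); // periodic memory sample
            }
        }
        System.out.println("heap samples taken: " + samples.size()); // 10
    }
}
```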





[jira] [Updated] (KAFKA-3738) Add load system tests for Streams

2017-01-17 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-3738:
-
Description: 
Today in system tests we have mainly two types of tests: 1) correctness tests 
for delivery guarantees with some "chaos monkey" mechanism; 2) performance 
tests for evaluating single-node efficiency.

We want to consider adding another type of system test called "load tests" for 
Streams, in which we have a large-scale setting of a Streams app, let it run 
under heavy load for some time, and measure CPU / memory / disk / etc. to 
check that it behaves properly under load.

  was:Since Streams has external dependencies originating from C++, it is 
more likely to have memory leaks. We should consider adding a system test for 
validating object leaks.


> Add load system tests for Streams
> -
>
> Key: KAFKA-3738
> URL: https://issues.apache.org/jira/browse/KAFKA-3738
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>  Labels: test
>
> Today in system tests we have mainly two types of tests: 1) correctness tests 
> for delivery guarantees with some "chaos monkey" mechanism; 2) performance 
> tests for evaluating single-node efficiency.
> We want to consider adding another type of system test called "load tests" 
> for Streams, in which we have a large-scale setting of a Streams app, let it 
> run under heavy load for some time, and measure CPU / memory / disk / etc. to 
> check that it behaves properly under load.





[jira] [Updated] (KAFKA-3738) Add load system tests for Streams

2017-01-17 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-3738:
-
Summary: Add load system tests for Streams  (was: Add system test against 
memory leak in Kafka Streams)

> Add load system tests for Streams
> -
>
> Key: KAFKA-3738
> URL: https://issues.apache.org/jira/browse/KAFKA-3738
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>  Labels: test
>
> Since Streams has external dependencies originating from C++, it is more 
> likely to have memory leaks. We should consider adding a system test for 
> validating object leaks.





[jira] [Updated] (KAFKA-4588) QueryableStateIntegrationTest.shouldNotMakeStoreAvailableUntilAllStoresAvailable is occasionally failing on jenkins

2017-01-17 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-4588:
-
   Resolution: Fixed
Fix Version/s: 0.10.3.0
   Status: Resolved  (was: Patch Available)

Issue resolved by pull request 2371
[https://github.com/apache/kafka/pull/2371]

> QueryableStateIntegrationTest.shouldNotMakeStoreAvailableUntilAllStoresAvailable
>  is occasionally failing on jenkins
> ---
>
> Key: KAFKA-4588
> URL: https://issues.apache.org/jira/browse/KAFKA-4588
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Affects Versions: 0.10.2.0
>Reporter: Damian Guy
>Assignee: Damian Guy
> Fix For: 0.10.3.0, 0.10.2.0
>
>
> org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
> shouldNotMakeStoreAvailableUntilAllStoresAvailable[1] FAILED
> java.lang.AssertionError: Condition not met within timeout 3. waiting 
> for store count-by-key
> at 
> org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:259)
> at 
> org.apache.kafka.streams.integration.QueryableStateIntegrationTest.shouldNotMakeStoreAvailableUntilAllStoresAvailable(QueryableStateIntegrationTest.java:502)





[jira] [Commented] (KAFKA-3896) Unstable test KStreamRepartitionJoinTest.shouldCorrectlyRepartitionOnJoinOperations

2017-01-17 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15826752#comment-15826752
 ] 

Guozhang Wang commented on KAFKA-3896:
--

Another instance: 
https://builds.apache.org/job/kafka-pr-jdk7-scala2.10/929/testReport/junit/org.apache.kafka.streams.integration/KStreamRepartitionJoinTest/shouldCorrectlyRepartitionOnJoinOperations_1_/

{code}
Stacktrace

java.lang.AssertionError: Condition not met within timeout 6. Did not 
receive 5 number of records
at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:259)
at 
org.apache.kafka.streams.integration.utils.IntegrationTestUtils.waitUntilMinValuesRecordsReceived(IntegrationTestUtils.java:259)
at 
org.apache.kafka.streams.integration.KStreamRepartitionJoinTest.receiveMessages(KStreamRepartitionJoinTest.java:381)
at 
org.apache.kafka.streams.integration.KStreamRepartitionJoinTest.verifyCorrectOutput(KStreamRepartitionJoinTest.java:291)
at 
org.apache.kafka.streams.integration.KStreamRepartitionJoinTest.shouldCorrectlyRepartitionOnJoinOperations(KStreamRepartitionJoinTest.java:139)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at org.junit.runners.Suite.runChild(Suite.java:128)
at org.junit.runners.Suite.runChild(Suite.java:27)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
at org.junit.rules.RunRules.evaluate(RunRules.java:20)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:114)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:57)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:66)
at 
org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
at sun.reflect.GeneratedMethodAccessor8.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at 
org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
at 
org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
at 
org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:109)
at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
{code}

[jira] [Updated] (KAFKA-3896) Unstable test KStreamRepartitionJoinTest.shouldCorrectlyRepartitionOnJoinOperations

2017-01-17 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-3896:
-
Assignee: Guozhang Wang  (was: Damian Guy)

> Unstable test 
> KStreamRepartitionJoinTest.shouldCorrectlyRepartitionOnJoinOperations
> ---
>
> Key: KAFKA-3896
> URL: https://issues.apache.org/jira/browse/KAFKA-3896
> Project: Kafka
>  Issue Type: Sub-task
>  Components: unit tests
>Reporter: Ashish K Singh
>Assignee: Guozhang Wang
> Fix For: 0.10.1.0
>
>
> {{KStreamRepartitionJoinTest.shouldCorrectlyRepartitionOnJoinOperations}} 
> seems to be unstable. A failure can be found 
> [here|https://builds.apache.org/job/kafka-trunk-git-pr-jdk7/4363/]. Could not 
> reproduce the test failure locally though.





[jira] [Updated] (KAFKA-4639) Kafka Streams metrics are undocumented

2017-01-16 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-4639:
-
Assignee: Eno Thereska

> Kafka Streams metrics are undocumented
> --
>
> Key: KAFKA-4639
> URL: https://issues.apache.org/jira/browse/KAFKA-4639
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.1.1
>Reporter: Elias Levy
>Assignee: Eno Thereska
>Priority: Minor
>
> The documentation is silent on the metrics collected by the Streams framework.





[jira] [Updated] (KAFKA-4182) Move the change logger out of RocksDB stores

2017-01-16 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-4182:
-
Summary: Move the change logger out of RocksDB stores  (was: Move the 
change logger our of RocksDB stores)

> Move the change logger out of RocksDB stores
> 
>
> Key: KAFKA-4182
> URL: https://issues.apache.org/jira/browse/KAFKA-4182
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 0.10.1.0
>Reporter: Damian Guy
>  Labels: performance
>
> We currently have the change logger embedded within the RocksDB store 
> implementations; however, this results in multiple implementations of the 
> same thing and bad separation of concerns. We should create a new LoggedStore 
> that wraps the outermost store when logging is enabled, for example:
> loggedStore -> cachingStore -> meteredStore -> innerStore
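A minimal sketch of that decorator layering (the interfaces below are illustrative stand-ins, not Kafka's actual state-store API):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Change logging lives in a single LoggedStore wrapper instead of being
// duplicated inside every RocksDB store implementation.
public class LoggedStoreSketch {
    interface KeyValueStore { void put(String key, String value); String get(String key); }

    static class InMemoryStore implements KeyValueStore {   // stands in for the RocksDB store
        private final Map<String, String> map = new HashMap<>();
        public void put(String key, String value) { map.put(key, value); }
        public String get(String key) { return map.get(key); }
    }

    static class LoggedStore implements KeyValueStore {     // outermost wrapper when logging is enabled
        final KeyValueStore inner;
        final List<String> changelog = new ArrayList<>();   // stands in for the changelog topic
        LoggedStore(KeyValueStore inner) { this.inner = inner; }
        public void put(String key, String value) {
            changelog.add(key + "=" + value);               // log once, regardless of inner store type
            inner.put(key, value);
        }
        public String get(String key) { return inner.get(key); }
    }

    public static void main(String[] args) {
        LoggedStore store = new LoggedStore(new InMemoryStore()); // loggedStore -> innerStore
        store.put("a", "1");
        System.out.println(store.get("a") + " changelog=" + store.changelog);
    }
}
```

The same wrapper composes with the caching and metering layers in the chain above without each of them knowing about logging.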





[jira] [Commented] (KAFKA-4421) Update release process so that Scala 2.12 artifacts are published

2017-01-16 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15824512#comment-15824512
 ] 

Guozhang Wang commented on KAFKA-4421:
--

[~ijuma] Ah yes, I did. This needs to be written down as well.

> Update release process so that Scala 2.12 artifacts are published
> -
>
> Key: KAFKA-4421
> URL: https://issues.apache.org/jira/browse/KAFKA-4421
> Project: Kafka
>  Issue Type: Sub-task
>  Components: build
>Reporter: Ismael Juma
> Fix For: 0.10.2.0
>
>
> Since Scala 2.12 requires Java 8 while Kafka still supports Java 7, the *All 
> commands don't include Scala 2.12. As such, simply running releaseTarGzAll 
> won't generate the Scala 2.12 artifacts and we also need to run `./gradlew 
> releaseTarGz -PscalaVersion=2.12`.
> The following page needs to be updated to include this and any other change 
> required:
> https://cwiki.apache.org/confluence/display/KAFKA/Release+Process





[jira] [Commented] (KAFKA-4421) Update release process so that Scala 2.12 artifacts are published

2017-01-16 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15824470#comment-15824470
 ] 

Guozhang Wang commented on KAFKA-4421:
--

I added a one-liner note on the release process wiki while doing 0.10.1.1:

{code}
NOTE: as of 0.10.1 Kafka builds do not include Scala 2.12 yet, if you want to 
specifically create the artifacts for Scala 2.12, use  ./gradlew releaseTarGz 
-PscalaVersion=2.12.1
{code}

But this may need to be updated as an enforced step in the future if we want to 
officially include 2.12 artifacts.

> Update release process so that Scala 2.12 artifacts are published
> -
>
> Key: KAFKA-4421
> URL: https://issues.apache.org/jira/browse/KAFKA-4421
> Project: Kafka
>  Issue Type: Sub-task
>  Components: build
>Reporter: Ismael Juma
> Fix For: 0.10.2.0
>
>
> Since Scala 2.12 requires Java 8 while Kafka still supports Java 7, the *All 
> commands don't include Scala 2.12. As such, simply running releaseTarGzAll 
> won't generate the Scala 2.12 artifacts and we also need to run `./gradlew 
> releaseTarGz -PscalaVersion=2.12`.
> The following page needs to be updated to include this and any other change 
> required:
> https://cwiki.apache.org/confluence/display/KAFKA/Release+Process





[jira] [Resolved] (KAFKA-4604) Gradle Eclipse plugin creates projects for non project subfolders

2017-01-16 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4604?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-4604.
--
Resolution: Duplicate

> Gradle Eclipse plugin creates projects for non project subfolders
> -
>
> Key: KAFKA-4604
> URL: https://issues.apache.org/jira/browse/KAFKA-4604
> Project: Kafka
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.11.0.0
> Environment: Tried with Gradle 3.2.1 and Eclipse Neon
>Reporter: Dhwani Katagade
>Assignee: Dhwani Katagade
>Priority: Minor
>  Labels: build, easyfix, patch
>
> Running the command *./gradlew eclipse* generates .project and .classpath 
> files for all projects. But it also generates these files for the root 
> project folder and the connect subfolder that holds the 4 connector projects 
> even though these folders are not actual project folders.
> The unnecessary connect project is benign, but the unnecessary kafka project 
> created for the root folder has a side effect. The root folder has a bin 
> directory that holds some scripts. When a _Clean all projects_ is done in 
> Eclipse, it cleans up the scripts in the bin directory. These have to be 
> restored by running *git checkout \-\- *. The same could become a 
> problem for the connect project as well if tomorrow we place some files under 
> connect/bin.





[jira] [Updated] (KAFKA-4619) Dissallow to output records with unknown keys in TransformValues

2017-01-16 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-4619:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Issue resolved by pull request 2346
[https://github.com/apache/kafka/pull/2346]

> Dissallow to output records with unknown keys in TransformValues
> 
>
> Key: KAFKA-4619
> URL: https://issues.apache.org/jira/browse/KAFKA-4619
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.1.0, 0.10.0.0, 0.10.0.1, 0.10.1.1
>Reporter: Matthias J. Sax
>Assignee: Matthias J. Sax
> Fix For: 0.10.2.0
>
>
> {{KStream#transformValues}} allows the user to return a new value in 
> {{punctuate}} and it also allows the user to return any new key-value pair 
> using {{ProcessorContext#forward}}. For {{punctuate}} the key gets set to 
> {{null}} under the hood, and for {{forward}} the user can put any new key 
> they want. However, Kafka Streams assumes that using {{transformValues}} does 
> not change the key -- thus, this assumption might not hold right now, 
> potentially resulting in incorrectly partitioned data.
> It should therefore not be possible to return any data in {{punctuate}} or 
> via {{forward}}, and we should raise an exception.
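One way to enforce this, sketched here with made-up names rather than the real Streams internals, is to run the user's transformer against a context whose {{forward}} always throws:

```java
// Hypothetical guard: transformValues hands the user code a context that
// rejects forward(), so the record key can never be changed.
public class ForwardDisabledSketch {
    interface ProcessorContext { void forward(Object key, Object value); }

    static class ForwardDisabledContext implements ProcessorContext {
        public void forward(Object key, Object value) {
            throw new IllegalStateException(
                "ProcessorContext#forward() is not supported from transformValues");
        }
    }

    public static void main(String[] args) {
        ProcessorContext context = new ForwardDisabledContext();
        try {
            context.forward("newKey", "value"); // a transformer attempting to change the key
        } catch (IllegalStateException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```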





[jira] [Work started] (KAFKA-3502) Build is killed during kafka streams tests due to `pure virtual method called` error

2017-01-14 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-3502 started by Guozhang Wang.

> Build is killed during kafka streams tests due to `pure virtual method 
> called` error
> 
>
> Key: KAFKA-3502
> URL: https://issues.apache.org/jira/browse/KAFKA-3502
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Ashish K Singh
>Assignee: Guozhang Wang
>  Labels: transient-unit-test-failure
>
> Build failed due to failure in streams' test. Not clear which test led to 
> this.
> Jenkins console: 
> https://builds.apache.org/job/kafka-trunk-git-pr-jdk7/3210/console
> {code}
> org.apache.kafka.streams.kstream.internals.KTableFilterTest > testValueGetter 
> PASSED
> org.apache.kafka.streams.kstream.internals.KStreamFlatMapTest > testFlatMap 
> PASSED
> org.apache.kafka.streams.kstream.internals.KTableAggregateTest > testAggBasic 
> PASSED
> org.apache.kafka.streams.kstream.internals.KStreamFlatMapValuesTest > 
> testFlatMapValues PASSED
> org.apache.kafka.streams.kstream.KStreamBuilderTest > testMerge PASSED
> org.apache.kafka.streams.kstream.KStreamBuilderTest > testFrom PASSED
> org.apache.kafka.streams.kstream.KStreamBuilderTest > testNewName PASSED
> pure virtual method called
> terminate called without an active exception
> :streams:test FAILED
> FAILURE: Build failed with an exception.
> * What went wrong:
> Execution failed for task ':streams:test'.
> > Process 'Gradle Test Executor 4' finished with non-zero exit value 134
> {code}
> Tried reproducing the issue locally, but could not.





[jira] [Resolved] (KAFKA-3901) KStreamTransformValues$KStreamTransformValuesProcessor#process() forwards null values

2017-01-14 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-3901.
--
Resolution: Won't Fix

> KStreamTransformValues$KStreamTransformValuesProcessor#process() forwards 
> null values
> -
>
> Key: KAFKA-3901
> URL: https://issues.apache.org/jira/browse/KAFKA-3901
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.0.0
>Reporter: Dmitry Minkovsky
>Assignee: Guozhang Wang
>Priority: Minor
>
> Did this get missed in [KAFKA-3519: Refactor Transformer's transform / 
> punctuate to return nullable 
> value|https://github.com/apache/kafka/commit/40fd456649b5df29d030da46865b5e7e0ca6db15#diff-338c230fd5a15d98550230007651a224]?
>  I think it may have, because that processor's #punctuate() does not forward 
> null. 





[jira] [Updated] (KAFKA-3515) Migrate org.apache.kafka.connect.json.JsonSerializer / Deser to common package

2017-01-14 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-3515:
-
Labels: needs-kip  (was: )

> Migrate org.apache.kafka.connect.json.JsonSerializer / Deser to common 
> package 
> ---
>
> Key: KAFKA-3515
> URL: https://issues.apache.org/jira/browse/KAFKA-3515
> Project: Kafka
>  Issue Type: Bug
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
>  Labels: needs-kip
> Fix For: 0.10.2.0
>
>
> We have these two classes in org.apache.kafka.connect.json but they should 
> really be in o.a.k.common.serialization. To maintain backward compatibility 
> we need to duplicate them for now and mark the ones in connect as deprecated.
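The compatibility pattern can be sketched as follows (simplified signatures, not the real serializer interfaces): the class at the old location stays as a deprecated alias of the relocated one, so existing imports keep compiling.

```java
// Hypothetical illustration of the "duplicate and deprecate" move.
public class DeprecatedAliasSketch {
    // the new home, e.g. something under o.a.k.common.serialization
    static class JsonSerializer {
        byte[] serialize(String data) { return data.getBytes(); }
    }

    /** @deprecated moved to the common serialization package; kept for compatibility. */
    @Deprecated
    static class LegacyJsonSerializer extends JsonSerializer { }

    public static void main(String[] args) {
        byte[] bytes = new LegacyJsonSerializer().serialize("{\"a\":1}"); // old code path still works
        System.out.println(bytes.length); // 7
    }
}
```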





[jira] [Comment Edited] (KAFKA-4125) Provide low-level Processor API meta data in DSL layer

2017-01-14 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15822837#comment-15822837
 ] 

Guozhang Wang edited comment on KAFKA-4125 at 1/14/17 3:38 PM:
---

Thanks for bringing this up [~mjsax]. I do agree that we have multiple API 
discussions spread in different places now. Maybe you can create a wiki in AK 
summarizing all those proposals and we can then discuss them together and 
perhaps propose a single KIP wrapping them all? Current ones I can think of are:

1. Rich functions.
2. Add {{key}} to {{mapValues}} / {{transformValues}}.
3. Add {{flatTransform}} and {{flatTransformValues}}.
4. Separate {{RecordContext}} from {{ProcessorContext}}.
5. Deprecate not user-facing functions from {{TopologyBuilder}}.
6. Remove {{loggingEnabled}} parameter in {{ProcessorContext.register}}.
7. Remove {{disableLogging}} from {{Stores}}.
8. Change return type of {{Transformer.punctuate()}} from {{R}} to {{null}}.


was (Author: guozhang):
Thanks for bringing this up [~mjsax]. I do agree that we have multiple API 
discussions spread in different places now. Maybe you can create a wiki in AK 
summarizing all those proposals and we can then discuss them together and 
perhaps propose a single KIP wrapping them all? Current ones I can think of are:

1. Rich functions.
2. Add {{key}} to {{mapValues}} / {{transformValues}}.
3. Add {{flatTransform}} and {{flatTransformValues}}.
4. Separate {{RecordContext}} from {{ProcessorContext}}.
5. Deprecate not user-facing functions from {{TopologyBuilder}}.
6. Remove {{loggingEnabled}} parameter in {{ProcessorContext.register}}.
7. Remove {{disableLogging}} from {{Stores}}.


> Provide low-level Processor API meta data in DSL layer
> --
>
> Key: KAFKA-4125
> URL: https://issues.apache.org/jira/browse/KAFKA-4125
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: Bill Bejeck
>Priority: Minor
>
> For the Processor API, users can get meta data like record offset, timestamp etc. 
> via the provided {{Context}} object. It might be useful to allow users to 
> access this information in the DSL layer, too.
> The idea would be to do it "the Flink way", i.e., by providing
> RichFunctions; take {{mapValues()}} for example.
> It takes a {{ValueMapper}} that only has the method
> {noformat}
> V2 apply(V1 value);
> {noformat}
> Thus, you cannot get any meta data within apply (it's completely "blind").
> We would add two more interfaces: {{RichFunction}} with a method
> {{open(Context context)}} and
> {noformat}
> RichValueMapper<V1, V2> extends ValueMapper<V1, V2>, RichFunction
> {noformat}
> This way, the user can choose to implement a Rich- or Standard-function and
> we do not need to change existing APIs. Both can be handed into
> {{KStream.mapValues()}} for example. Internally, we check if a Rich
> function is provided, and if yes, hand in the {{Context}} object once, to
> make it available to the user, who can now access it within {{apply()}} -- of
> course, the user must set a member variable in {{open()}} to hold the
> reference to the Context object.
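The proposed pattern can be sketched in plain Java. The {{Context}}, {{RichFunction}} and {{RichValueMapper}} types below are simplified stand-ins for the proposal, not shipped Kafka API:

```java
// Sketch of the proposed rich-function pattern; all interfaces here are
// simplified stand-ins for the proposal, not actual Kafka classes.
public class RichFunctionSketch {

    interface Context {            // stand-in for the record context
        long offset();
    }

    interface ValueMapper<V1, V2> {
        V2 apply(V1 value);
    }

    interface RichFunction {
        void open(Context context);
    }

    interface RichValueMapper<V1, V2> extends ValueMapper<V1, V2>, RichFunction {}

    static String demo() {
        RichValueMapper<String, String> mapper = new RichValueMapper<String, String>() {
            private Context ctx;   // member variable set in open(), as described

            @Override
            public void open(Context context) { this.ctx = context; }

            @Override
            public String apply(String value) {
                return value + "@offset=" + ctx.offset();  // meta data is now visible
            }
        };
        mapper.open(() -> 42L);    // the framework hands in the context once
        return mapper.apply("v");
    }

    public static void main(String[] args) {
        System.out.println(demo()); // v@offset=42
    }
}
```

Since {{RichValueMapper}} extends the existing {{ValueMapper}}, an unchanged {{KStream.mapValues()}} signature could accept both; the framework would only need an `instanceof RichFunction` check before processing.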





[jira] [Commented] (KAFKA-4125) Provide low-level Processor API meta data in DSL layer

2017-01-14 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15822837#comment-15822837
 ] 

Guozhang Wang commented on KAFKA-4125:
--

Thanks for bringing this up [~mjsax]. I do agree that we have multiple API 
discussions spread in different places now. Maybe you can create a wiki in AK 
summarizing all those proposals and we can then discuss them together and 
perhaps propose a single KIP wrapping them all? Current ones I can think of are:

1. Rich functions.
2. Add {{key}} to {{mapValues}} / {{transformValues}}.
3. Add {{flatTransform}} and {{flatTransformValues}}.
4. Separate {{RecordContext}} from {{ProcessorContext}}.
5. Deprecate not user-facing functions from {{TopologyBuilder}}.
6. Remove {{loggingEnabled}} parameter in {{ProcessorContext.register}}.
7. Remove {{disableLogging}} from {{Stores}}.


> Provide low-level Processor API meta data in DSL layer
> --
>
> Key: KAFKA-4125
> URL: https://issues.apache.org/jira/browse/KAFKA-4125
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: Bill Bejeck
>Priority: Minor
>
> For the Processor API, users can get meta data like record offset, timestamp etc. 
> via the provided {{Context}} object. It might be useful to allow users to 
> access this information in the DSL layer, too.
> The idea would be to do it "the Flink way", i.e., by providing
> RichFunctions; take {{mapValues()}} for example.
> It takes a {{ValueMapper}} that only has the method
> {noformat}
> V2 apply(V1 value);
> {noformat}
> Thus, you cannot get any meta data within apply (it's completely "blind").
> We would add two more interfaces: {{RichFunction}} with a method
> {{open(Context context)}} and
> {noformat}
> RichValueMapper<V1, V2> extends ValueMapper<V1, V2>, RichFunction
> {noformat}
> This way, the user can choose to implement a Rich- or Standard-function and
> we do not need to change existing APIs. Both can be handed into
> {{KStream.mapValues()}} for example. Internally, we check if a Rich
> function is provided, and if yes, hand in the {{Context}} object once, to
> make it available to the user, who can now access it within {{apply()}} -- of
> course, the user must set a member variable in {{open()}} to hold the
> reference to the Context object.





[jira] [Created] (KAFKA-4633) Always use regex pattern subscription to avoid auto create topics

2017-01-13 Thread Guozhang Wang (JIRA)
Guozhang Wang created KAFKA-4633:


 Summary: Always use regex pattern subscription to avoid auto 
create topics
 Key: KAFKA-4633
 URL: https://issues.apache.org/jira/browse/KAFKA-4633
 Project: Kafka
  Issue Type: Bug
  Components: streams
Reporter: Guozhang Wang
Assignee: Guozhang Wang


In {{KafkaConsumer}}, a metadata update is requested whenever 
{{subscribe(List<String> topics, ...)}} is called. When such a metadata 
request is sent to the broker upon the first {{poll}} call, it will cause the 
broker to auto-create any topics that do not exist if the broker-side config 
{{auto.create.topics.enable}} is turned on.

In order to work around this issue until that config defaults to false and is 
gradually deprecated, we will let Streams always use the other 
{{subscribe}} function with a regex pattern, which sends the metadata request 
with an empty topic list and hence won't trigger broker-side auto topic creation.

The side-effect is that the metadata response will be larger, since it contains 
metadata for all topics; but since we only refresh it infrequently this adds 
negligible overhead.
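The switch relies on the real consumer API {{subscribe(Pattern, ConsumerRebalanceListener)}}. The helper below, which turns a fixed topic list into an equivalent pattern, is a hypothetical sketch (the topic names are made up):

```java
import java.util.Arrays;
import java.util.List;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

public class TopicPatternSketch {

    // Hypothetical helper: build a pattern matching exactly the given topics.
    // Pattern.quote escapes any regex metacharacters in the topic names.
    static Pattern patternFor(List<String> topics) {
        return Pattern.compile(topics.stream()
                .map(Pattern::quote)
                .collect(Collectors.joining("|")));
    }

    public static void main(String[] args) {
        Pattern p = patternFor(Arrays.asList("orders", "my-app-repartition"));
        // Pattern.matcher(...).matches() requires a full-string match:
        System.out.println(p.matcher("orders").matches());           // true
        System.out.println(p.matcher("orders-unrelated").matches()); // false
        // The pattern would then be passed to
        // consumer.subscribe(p, rebalanceListener) instead of a topic list.
    }
}
```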





[jira] [Commented] (KAFKA-4614) Long GC pause harming broker performance which is caused by mmap objects created for OffsetIndex

2017-01-13 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15822254#comment-15822254
 ] 

Guozhang Wang commented on KAFKA-4614:
--

Thanks for this report and thorough analysis [~kawamuray]! 

> Long GC pause harming broker performance which is caused by mmap objects 
> created for OffsetIndex
> 
>
> Key: KAFKA-4614
> URL: https://issues.apache.org/jira/browse/KAFKA-4614
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 0.10.0.1
>Reporter: Yuto Kawamura
>Assignee: Yuto Kawamura
>  Labels: latency, performance
> Fix For: 0.10.2.0
>
>
> First let me clarify our system environment information as I think it's 
> important to understand this issue:
> OS: CentOS6
> Kernel version: 2.6.32-XX
> Filesystem used for data volume: XFS
> Java version: 1.8.0_66
> GC option: Kafka default(G1GC)
> Kafka version: 0.10.0.1
> h2. Phenomenon
> In our Kafka cluster, the usual response time for a Produce request is about 1ms 
> at the 50th percentile to 10ms at the 99th percentile. All topics are configured to 
> have 3 replicas and all producers are configured with {{acks=all}}, so this time 
> includes replication latency.
> Sometimes we observe the 99th percentile latency increase to 100ms ~ 500ms, 
> but in most cases the time-consuming part is "Remote", which means it is 
> caused by slow replication, which is known to happen for various reasons (this 
> is also an issue that we're trying to improve, but it is out of scope for 
> this issue).
> However, we found that there are some different patterns which happen rarely 
> but consistently, 3 ~ 5 times a day on each server. The interesting part is 
> that "RequestQueue" also increased, as well as "Total" and "Remote".
> At the same time, we observed that the disk read metrics (in terms of both 
> read bytes and read time) spike at exactly the same moment. Currently we 
> have only caught-up consumers, so this metric normally stays at zero while all 
> requests are served from the page cache.
> In order to investigate what Kafka is "read"ing, I employed SystemTap and 
> wrote the following script. It traces all disk reads (only actual reads by the 
> physical device) made to the data volume by the broker process.
> {code}
> global target_pid = KAFKA_PID
> global target_dev = DATA_VOLUME
> probe ioblock.request {
>   if (rw == BIO_READ && pid() == target_pid && devname == target_dev) {
>  t_ms = gettimeofday_ms() + 9 * 3600 * 1000 // timezone adjustment
>  printf("%s,%03d:  tid = %d, device = %s, inode = %d, size = %d\n", 
> ctime(t_ms / 1000), t_ms % 1000, tid(), devname, ino, size)
>  print_backtrace()
>  print_ubacktrace()
>   }
> }
> {code}
> As a result, we could observe many logs like the one below:
> {code}
> Thu Dec 22 17:21:39 2016,209:  tid = 126123, device = sdb1, inode = -1, size 
> = 4096
>  0x81275050 : generic_make_request+0x0/0x5a0 [kernel]
>  0x81275660 : submit_bio+0x70/0x120 [kernel]
>  0xa036bcaa : _xfs_buf_ioapply+0x16a/0x200 [xfs]
>  0xa036d95f : xfs_buf_iorequest+0x4f/0xe0 [xfs]
>  0xa036db46 : _xfs_buf_read+0x36/0x60 [xfs]
>  0xa036dc1b : xfs_buf_read+0xab/0x100 [xfs]
>  0xa0363477 : xfs_trans_read_buf+0x1f7/0x410 [xfs]
>  0xa033014e : xfs_btree_read_buf_block+0x5e/0xd0 [xfs]
>  0xa0330854 : xfs_btree_lookup_get_block+0x84/0xf0 [xfs]
>  0xa0330edf : xfs_btree_lookup+0xbf/0x470 [xfs]
>  0xa032456f : xfs_bmbt_lookup_eq+0x1f/0x30 [xfs]
>  0xa032628b : xfs_bmap_del_extent+0x12b/0xac0 [xfs]
>  0xa0326f34 : xfs_bunmapi+0x314/0x850 [xfs]
>  0xa034ad79 : xfs_itruncate_extents+0xe9/0x280 [xfs]
>  0xa0366de5 : xfs_inactive+0x2f5/0x450 [xfs]
>  0xa0374620 : xfs_fs_clear_inode+0xa0/0xd0 [xfs]
>  0x811affbc : clear_inode+0xac/0x140 [kernel]
>  0x811b0776 : generic_delete_inode+0x196/0x1d0 [kernel]
>  0x811b0815 : generic_drop_inode+0x65/0x80 [kernel]
>  0x811af662 : iput+0x62/0x70 [kernel]
>  0x37ff2e5347 : munmap+0x7/0x30 [/lib64/libc-2.12.so]
>  0x7ff169ba5d47 : Java_sun_nio_ch_FileChannelImpl_unmap0+0x17/0x50 
> [/usr/jdk1.8.0_66/jre/lib/amd64/libnio.so]
>  0x7ff269a1307e
> {code}
> We took a jstack dump of the broker process and found that tid = 126123 
> corresponds to the thread which is for GC(hex(tid) == nid(Native Thread ID)):
> {code}
> $ grep 0x1ecab /tmp/jstack.dump
> "Reference Handler" #2 daemon prio=10 os_prio=0 tid=0x7ff278d0c800 
> nid=0x1ecab in Object.wait() [0x7ff17da11000]
> {code}
> In order to confirm, we enabled {{PrintGCApplicationStoppedTime}} switch and 
> confirmed that in total the time the broker paused is longer than usual, when 
> a broker 
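The mmap lifecycle behind this backtrace can be sketched in plain Java (the file path is illustrative; Kafka's {{OffsetIndex}} uses the same {{FileChannel.map}} mechanism):

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class MmapSketch {

    // Map a file roughly the way an offset index does, write and read a value.
    static long roundTrip(String path) throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile(path, "rw")) {
            raf.setLength(4096);
            MappedByteBuffer mmap =
                    raf.getChannel().map(FileChannel.MapMode.READ_WRITE, 0, 4096);
            mmap.putLong(0, 42L);          // writes go straight through the mapping
            // Note: closing the channel does NOT unmap. The munmap happens only
            // when the MappedByteBuffer is garbage collected, which is why the
            // traced munmap call above shows up on a GC-driven code path.
            return mmap.getLong(0);
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(roundTrip(
                System.getProperty("java.io.tmpdir") + "/offset-index-sketch"));
    }
}
```

Because the OS releases the mapping (and, on XFS, may truncate the deleted inode's extents) inside that GC-triggered {{munmap}}, the cost of the filesystem work lands inside the GC pause rather than on an application thread.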

[jira] [Resolved] (KAFKA-3739) Add no-arg constructor for library provided serdes

2017-01-13 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-3739.
--
   Resolution: Fixed
Fix Version/s: 0.10.2.0

Issue resolved by pull request 2308
[https://github.com/apache/kafka/pull/2308]

> Add no-arg constructor for library provided serdes
> --
>
> Key: KAFKA-3739
> URL: https://issues.apache.org/jira/browse/KAFKA-3739
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: huxi
>  Labels: newbie, user-experience
> Fix For: 0.10.2.0
>
>
> We need to add the no-arg constructor explicitly for those library-provided 
> serdes such as {{WindowedSerde}} that already have constructors with 
> arguments. Otherwise they cannot be used through configs, which expect 
> to construct them via reflection with no-arg constructors.
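The failure mode can be sketched with plain reflection (the class names below are illustrative, not Kafka's):

```java
public class ReflectionSketch {

    public static class WithNoArg {
        public WithNoArg() {}
    }

    public static class WithoutNoArg {
        public WithoutNoArg(String innerSerde) {}  // only a parameterized constructor
    }

    // Essentially what a config framework does when given a class name:
    static Object instantiate(Class<?> clazz) throws Exception {
        return clazz.getDeclaredConstructor().newInstance();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(instantiate(WithNoArg.class).getClass().getSimpleName());
        try {
            instantiate(WithoutNoArg.class);
        } catch (NoSuchMethodException e) {
            // No no-arg constructor -> reflective construction fails, which is
            // why the library serdes need one added explicitly.
            System.out.println("NoSuchMethodException");
        }
    }
}
```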





[jira] [Commented] (KAFKA-3537) Provide access to low-level Metrics in ProcessorContext

2017-01-13 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15822002#comment-15822002
 ] 

Guozhang Wang commented on KAFKA-3537:
--

[~mdcoon1] In the coming 0.10.2.0 release users can get the metrics registry 
from {{context().metrics().metrics()}}, just FYI.
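A sketch of what that registry access enables for the reporting use case described below (the {{MetricName}} type and the metric names here are simplified stand-ins, not Kafka's actual classes):

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class MetricsRegistrySketch {

    // Simplified stand-in for a structured metric name.
    static final class MetricName {
        final String group, name;
        MetricName(String group, String name) { this.group = group; this.name = name; }
    }

    // With the raw registry in hand, group/name are structured fields --
    // no parsing of pre-formatted metric strings is needed.
    static List<String> export(Map<MetricName, Double> registry) {
        return registry.entrySet().stream()
                .map(e -> e.getKey().group + "/" + e.getKey().name + "=" + e.getValue())
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        Map<MetricName, Double> registry = new LinkedHashMap<>();
        registry.put(new MetricName("stream-task-metrics", "process-rate"), 123.0);
        System.out.println(export(registry));
    }
}
```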

> Provide access to low-level Metrics in ProcessorContext
> ---
>
> Key: KAFKA-3537
> URL: https://issues.apache.org/jira/browse/KAFKA-3537
> Project: Kafka
>  Issue Type: New Feature
>  Components: streams
>Affects Versions: 0.9.0.1
>Reporter: Michael Coon
>Assignee: Eno Thereska
>Priority: Minor
>  Labels: semantics
> Fix For: 0.10.2.0
>
>
> It would be good to have access to the underlying Metrics component in 
> StreamMetrics. StreamMetrics forces a naming convention for metrics that does 
> not fit our use case for reporting. We need to be able to convert the stream 
> metrics to our own metrics formatting and it's cumbersome to extract group/op 
> names from pre-formatted strings the way they are set up in StreamMetricsImpl. 
> If there were a "metrics()" method of StreamMetrics to give me the underlying 
> Metrics object, I could register my own sensors/metrics as needed.





[jira] [Resolved] (KAFKA-3537) Provide access to low-level Metrics in ProcessorContext

2017-01-13 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-3537.
--
Resolution: Fixed

Resolved as part of PR 1446.

> Provide access to low-level Metrics in ProcessorContext
> ---
>
> Key: KAFKA-3537
> URL: https://issues.apache.org/jira/browse/KAFKA-3537
> Project: Kafka
>  Issue Type: New Feature
>  Components: streams
>Affects Versions: 0.9.0.1
>Reporter: Michael Coon
>Assignee: Eno Thereska
>Priority: Minor
>  Labels: semantics
> Fix For: 0.10.2.0
>
>
> It would be good to have access to the underlying Metrics component in 
> StreamMetrics. StreamMetrics forces a naming convention for metrics that does 
> not fit our use case for reporting. We need to be able to convert the stream 
> metrics to our own metrics formatting and it's cumbersome to extract group/op 
> names from pre-formatted strings the way they are set up in StreamMetricsImpl. 
> If there were a "metrics()" method of StreamMetrics to give me the underlying 
> Metrics object, I could register my own sensors/metrics as needed.





[jira] [Updated] (KAFKA-1482) Transient test failures for kafka.admin.DeleteTopicTest.testPartitionReassignmentDuringDeleteTopic

2017-01-11 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-1482:
-
Component/s: unit tests

> Transient test failures for  
> kafka.admin.DeleteTopicTest.testPartitionReassignmentDuringDeleteTopic
> ---
>
> Key: KAFKA-1482
> URL: https://issues.apache.org/jira/browse/KAFKA-1482
> Project: Kafka
>  Issue Type: Bug
>  Components: unit tests
>Reporter: Guozhang Wang
>Assignee: Sriharsha Chintalapani
>  Labels: newbie
> Fix For: 0.10.3.0
>
> Attachments: kafka_delete_topic_test.log, kafka_tests.log
>
>
> {code}
> Stacktrace
> kafka.common.TopicAlreadyMarkedForDeletionException: topic test is already 
> marked for deletion
>   at kafka.admin.AdminUtils$.deleteTopic(AdminUtils.scala:324)
>   at 
> kafka.admin.DeleteTopicTest.testPartitionReassignmentDuringDeleteTopic(DeleteTopicTest.scala:117)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:114)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:57)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:66)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
>   at sun.reflect.GeneratedMethodAccessor16.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
>   at 
> org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
>   at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
>   at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:109)
>   at sun.reflect.GeneratedMethodAccessor15.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.internal.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:377)
>   at 
> org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:54)
>   at 
> org.gradle.internal.concurrent.StoppableExecutorImpl$1.run(StoppableExecutorImpl.java:40)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> 

[jira] [Reopened] (KAFKA-1482) Transient test failures for kafka.admin.DeleteTopicTest.testPartitionReassignmentDuringDeleteTopic

2017-01-11 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang reopened KAFKA-1482:
--

> Transient test failures for  
> kafka.admin.DeleteTopicTest.testPartitionReassignmentDuringDeleteTopic
> ---
>
> Key: KAFKA-1482
> URL: https://issues.apache.org/jira/browse/KAFKA-1482
> Project: Kafka
>  Issue Type: Bug
>Reporter: Guozhang Wang
>Assignee: Sriharsha Chintalapani
>  Labels: newbie
> Fix For: 0.10.3.0
>
> Attachments: kafka_delete_topic_test.log, kafka_tests.log
>
>

[jira] [Updated] (KAFKA-1482) Transient test failures for kafka.admin.DeleteTopicTest.testPartitionReassignmentDuringDeleteTopic

2017-01-11 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-1482:
-
Fix Version/s: (was: 0.8.2.0)
   0.10.3.0

> Transient test failures for  
> kafka.admin.DeleteTopicTest.testPartitionReassignmentDuringDeleteTopic
> ---
>
> Key: KAFKA-1482
> URL: https://issues.apache.org/jira/browse/KAFKA-1482
> Project: Kafka
>  Issue Type: Bug
>Reporter: Guozhang Wang
>Assignee: Sriharsha Chintalapani
>  Labels: newbie
> Fix For: 0.10.3.0
>
> Attachments: kafka_delete_topic_test.log, kafka_tests.log
>
>

[jira] [Commented] (KAFKA-1482) Transient test failures for kafka.admin.DeleteTopicTest.testPartitionReassignmentDuringDeleteTopic

2017-01-11 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15820321#comment-15820321
 ] 

Guozhang Wang commented on KAFKA-1482:
--

Saw this test failure in another example: 
https://builds.apache.org/job/kafka-pr-jdk8-scala2.12/776/testReport/junit/kafka.admin/DeleteTopicTest/testPartitionReassignmentDuringDeleteTopic/

> Transient test failures for  
> kafka.admin.DeleteTopicTest.testPartitionReassignmentDuringDeleteTopic
> ---
>
> Key: KAFKA-1482
> URL: https://issues.apache.org/jira/browse/KAFKA-1482
> Project: Kafka
>  Issue Type: Bug
>Reporter: Guozhang Wang
>Assignee: Sriharsha Chintalapani
>  Labels: newbie
> Fix For: 0.8.2.0
>
> Attachments: kafka_delete_topic_test.log, kafka_tests.log
>
>
> {code}
> Stacktrace
> kafka.common.TopicAlreadyMarkedForDeletionException: topic test is already 
> marked for deletion
>   at kafka.admin.AdminUtils$.deleteTopic(AdminUtils.scala:324)
>   at 
> kafka.admin.DeleteTopicTest.testPartitionReassignmentDuringDeleteTopic(DeleteTopicTest.scala:117)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:114)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:57)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:66)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
>   at sun.reflect.GeneratedMethodAccessor16.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
>   at 
> org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
>   at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
>   at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:109)
>   at sun.reflect.GeneratedMethodAccessor15.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.internal.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:377)
>   at 
> org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:54)

[jira] [Updated] (KAFKA-1482) Transient test failures for kafka.admin.DeleteTopicTest.testPartitionReassignmentDuringDeleteTopic

2017-01-11 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-1482:
-
Summary: Transient test failures for  
kafka.admin.DeleteTopicTest.testPartitionReassignmentDuringDeleteTopic  (was: 
Transient test failures for kafka.admin.DeleteTopicTest)

> Transient test failures for  
> kafka.admin.DeleteTopicTest.testPartitionReassignmentDuringDeleteTopic
> ---
>
> Key: KAFKA-1482
> URL: https://issues.apache.org/jira/browse/KAFKA-1482
> Project: Kafka
>  Issue Type: Bug
>Reporter: Guozhang Wang
>Assignee: Sriharsha Chintalapani
>  Labels: newbie
> Fix For: 0.8.2.0
>
> Attachments: kafka_delete_topic_test.log, kafka_tests.log
>
>
> {code}
> Stacktrace
> kafka.common.TopicAlreadyMarkedForDeletionException: topic test is already 
> marked for deletion
>   at kafka.admin.AdminUtils$.deleteTopic(AdminUtils.scala:324)
>   at 
> kafka.admin.DeleteTopicTest.testPartitionReassignmentDuringDeleteTopic(DeleteTopicTest.scala:117)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:114)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:57)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:66)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
>   at sun.reflect.GeneratedMethodAccessor16.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
>   at 
> org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
>   at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
>   at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:109)
>   at sun.reflect.GeneratedMethodAccessor15.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.internal.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:377)
>   at 
> org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:54)
>   at 
> org.gradle.internal.concurrent.StoppableExecutorImpl$1.run(StoppableExecutorImpl.java:40)

[jira] [Updated] (KAFKA-1482) Transient test failures for kafka.admin.DeleteTopicTest

2017-01-11 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-1482:
-
Description: 


{code}
Stacktrace

kafka.common.TopicAlreadyMarkedForDeletionException: topic test is already 
marked for deletion
at kafka.admin.AdminUtils$.deleteTopic(AdminUtils.scala:324)
at 
kafka.admin.DeleteTopicTest.testPartitionReassignmentDuringDeleteTopic(DeleteTopicTest.scala:117)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:114)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:57)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:66)
at 
org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
at sun.reflect.GeneratedMethodAccessor16.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at 
org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
at 
org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
at 
org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:109)
at sun.reflect.GeneratedMethodAccessor15.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at 
org.gradle.internal.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:377)
at 
org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:54)
at 
org.gradle.internal.concurrent.StoppableExecutorImpl$1.run(StoppableExecutorImpl.java:40)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
{code}

  was:
A couple of test cases have timing related transient test failures:

kafka.admin.DeleteTopicTest > testPartitionReassignmentDuringDeleteTopic FAILED
junit.framework.AssertionFailedError: Admin path /admin/delete_topic/test 
path not deleted even after a replica is restarted
at junit.framework.Assert.fail(Assert.java:47)
at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:578)
at 
kafka.admin.DeleteTopicTest.verifyTopicDeletion(DeleteTopicTest.scala:333)

[jira] [Resolved] (KAFKA-2543) facing test failure while building apache-kafka from source

2017-01-11 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-2543.
--
Resolution: Cannot Reproduce

> facing test failure while building apache-kafka from source
> ---
>
> Key: KAFKA-2543
> URL: https://issues.apache.org/jira/browse/KAFKA-2543
> Project: Kafka
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.8.2.1
> Environment: Linux ppc 64 le
>Reporter: naresh gundu
> Fix For: 0.8.1.2
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> I have run the steps below from GitHub (https://github.com/apache/kafka) to 
> build Apache Kafka branch 0.8.1.2:
>  cd source-code
>  gradle
>  ./gradlew jar
>  ./gradlew srcJar
>  ./gradlew test 
> Error:
> org.apache.kafka.common.record.MemoryRecordsTest > testIterator[2] FAILED
>  org.apache.kafka.common.KafkaException: 
> java.lang.reflect.InvocationTargetException
>  at 
> org.apache.kafka.common.record.Compressor.wrapForOutput(Compressor.java:217)
>  at org.apache.kafka.common.record.Compressor.(Compressor.java:73)
>  at org.apache.kafka.common.record.Compressor.(Compressor.java:77)
>  at org.apache.kafka.common.record.MemoryRecords.(MemoryRecords.java:43)
>  at 
> org.apache.kafka.common.record.MemoryRecords.emptyRecords(MemoryRecords.java:51)
>  at 
> org.apache.kafka.common.record.MemoryRecords.emptyRecords(MemoryRecords.java:55)
>  at 
> org.apache.kafka.common.record.MemoryRecordsTest.testIterator(MemoryRecordsTest.java:42)
> Caused by:
>  java.lang.reflect.InvocationTargetException
>  at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>  at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
>  at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>  at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
>  at 
> org.apache.kafka.common.record.Compressor.wrapForOutput(Compressor.java:213)
>  ... 6 more
> Caused by:
>  java.lang.UnsatisfiedLinkError: 
> /tmp/snappy-unknown-fe798961-3b66-41f3-808a-68ebd27cc82d-libsnappyjava.so: 
> /tmp/snappy-unknown-fe798961-3b66-41f3-808a-68ebd27cc82d-libsnappyjava.so: 
> cannot open shared object file: No such file or directory (Possible cause: 
> endianness mismatch)
>  at java.lang.ClassLoader$NativeLibrary.load(Native Method)
>  at java.lang.ClassLoader.loadLibrary1(ClassLoader.java:1965)
>  at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1890)
>  at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1851)
>  at java.lang.Runtime.load0(Runtime.java:795)
>  at java.lang.System.load(System.java:1062)
>  at org.xerial.snappy.SnappyLoader.loadNativeLibrary(SnappyLoader.java:166)
>  at org.xerial.snappy.SnappyLoader.load(SnappyLoader.java:145)
>  at org.xerial.snappy.Snappy.(Snappy.java:47)
>  at org.xerial.snappy.SnappyOutputStream.(SnappyOutputStream.java:90)
>  at org.xerial.snappy.SnappyOutputStream.(SnappyOutputStream.java:83)
>  ... 11 more
> 267 tests completed, 1 failed
>  :clients:test FAILED
> Please help me fix the failing test case.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4604) Gradle Eclipse plugin creates projects for non project subfolders

2017-01-11 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15820176#comment-15820176
 ] 

Guozhang Wang commented on KAFKA-4604:
--

PR link: https://github.com/apache/kafka/pull/2324

> Gradle Eclipse plugin creates projects for non project subfolders
> -
>
> Key: KAFKA-4604
> URL: https://issues.apache.org/jira/browse/KAFKA-4604
> Project: Kafka
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.11.0.0
> Environment: Tried with Gradle 3.2.1 and Eclipse Neon
>Reporter: Dhwani Katagade
>Assignee: Dhwani Katagade
>Priority: Minor
>  Labels: build, easyfix, patch
>
> Running the command *./gradlew eclipse* generates .project and .classpath 
> files for all projects. But it also generates these files for the root 
> project folder and the connect subfolder that holds the 4 connector projects 
> even though these folders are not actual project folders.
> The unnecessary connect project is benign, but the unnecessary kafka project 
> created for the root folder has a side effect. The root folder has a bin 
> directory that holds some scripts. When a _Clean all projects_ is done in 
> Eclipse, it cleans up the scripts in the bin directory. These have to be 
> restored by running *git checkout \-\- *. The same could become a problem for 
> the connect project as well if we later place files under connect/bin.
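The restore step described above can be reproduced in a throwaway repository. This is a minimal sketch, assuming the wiped scripts live under the repository's bin/ directory as the description states; the file name used here is illustrative.

```shell
# Set up a throwaway repo with one tracked script under bin/.
tmp=$(mktemp -d) && cd "$tmp"
git init -q .
git config user.email "demo@example.com" && git config user.name "demo"
mkdir bin && printf '#!/bin/sh\necho hello\n' > bin/run-class.sh
git add . && git commit -qm "add script"

# Simulate Eclipse's "Clean all projects" deleting the script.
rm bin/run-class.sh

# Restore everything under bin/ from the index.
git checkout -- bin/
ls bin/
```

`git checkout -- <path>` discards working-tree changes under that path, which is why it undoes the accidental deletion without touching anything else.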



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-4114) Allow for different "auto.offset.reset" strategies for different input streams

2017-01-11 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-4114:
-
   Resolution: Fixed
Fix Version/s: 0.10.2.0
   Status: Resolved  (was: Patch Available)

Issue resolved by pull request 2007
[https://github.com/apache/kafka/pull/2007]

> Allow for different "auto.offset.reset" strategies for different input streams
> --
>
> Key: KAFKA-4114
> URL: https://issues.apache.org/jira/browse/KAFKA-4114
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: Bill Bejeck
> Fix For: 0.10.2.0
>
>
> Today we only have one consumer config "auto.offset.reset" to control that 
> behavior, which means all streams are read either from "earliest" or "latest".
> However, it would be useful to extend this setting to give users finer 
> control over each input stream. For example, with two input streams, one of 
> them could always read from offset 0 upon (re)starting, while the other reads 
> from the log end offset.
> This JIRA requires extending the {{KStreamBuilder}} API methods 
> {{.stream(...)}} and {{.table(...)}} with a new parameter that indicates the 
> initial offset to be used.
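The finer-grained control requested above amounts to a per-topic override that falls back to the single global setting. The following is a minimal, dependency-free sketch of that resolution logic only; the names are illustrative and are not the actual Streams API.

```java
import java.util.Map;

public class OffsetResetResolution {
    enum ResetPolicy { EARLIEST, LATEST }

    // Per-topic overrides win; topics without one fall back to the global
    // "auto.offset.reset" value, preserving today's behavior as the default.
    static ResetPolicy resolve(Map<String, ResetPolicy> overrides,
                               ResetPolicy globalDefault,
                               String topic) {
        return overrides.getOrDefault(topic, globalDefault);
    }

    public static void main(String[] args) {
        Map<String, ResetPolicy> overrides =
                Map.of("clicks", ResetPolicy.EARLIEST); // always replay from offset 0
        System.out.println(resolve(overrides, ResetPolicy.LATEST, "clicks"));      // EARLIEST
        System.out.println(resolve(overrides, ResetPolicy.LATEST, "impressions")); // LATEST
    }
}
```

Keeping the global config as the fallback means existing applications that set nothing per stream keep their current behavior.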



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3875) Transient test failure: kafka.api.SslProducerSendTest.testSendNonCompressedMessageWithCreateTime

2017-01-11 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15819602#comment-15819602
 ] 

Guozhang Wang commented on KAFKA-3875:
--

Another failure with different error message:

{code}
Stacktrace

java.lang.AssertionError: No request is complete.
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at 
kafka.api.BaseProducerSendTest$$anonfun$testCloseWithZeroTimeoutFromCallerThread$1.apply$mcVI$sp(BaseProducerSendTest.scala:398)
at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:160)
at 
kafka.api.BaseProducerSendTest.testCloseWithZeroTimeoutFromCallerThread(BaseProducerSendTest.scala:395)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:114)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:57)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:66)
at 
org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
at sun.reflect.GeneratedMethodAccessor17.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at 
org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
at 
org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
at 
org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:109)
at sun.reflect.GeneratedMethodAccessor16.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at 
org.gradle.internal.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:377)
at 
org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:54)
at 
org.gradle.internal.concurrent.StoppableExecutorImpl$1.run(StoppableExecutorImpl.java:40)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
{code}

> Transient test failure: 
> kafka.api.SslProducerSendTest.testSendNonCompressedMessageWithCreateTime
> 
>
> Key: KAFKA-3875
> URL: https://issues.apache.org/jira/browse/KAFKA-3875
>   

[jira] [Updated] (KAFKA-4588) QueryableStateIntegrationTest.shouldNotMakeStoreAvailableUntilAllStoresAvailable is occasionally failing on jenkins

2017-01-11 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-4588:
-
Issue Type: Sub-task  (was: Bug)
Parent: KAFKA-2054

> QueryableStateIntegrationTest.shouldNotMakeStoreAvailableUntilAllStoresAvailable
>  is occasionally failing on jenkins
> ---
>
> Key: KAFKA-4588
> URL: https://issues.apache.org/jira/browse/KAFKA-4588
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Affects Versions: 0.10.2.0
>Reporter: Damian Guy
>Assignee: Damian Guy
> Fix For: 0.10.2.0
>
>
> org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
> shouldNotMakeStoreAvailableUntilAllStoresAvailable[1] FAILED
> java.lang.AssertionError: Condition not met within timeout 3. waiting 
> for store count-by-key
> at 
> org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:259)
> at 
> org.apache.kafka.streams.integration.QueryableStateIntegrationTest.shouldNotMakeStoreAvailableUntilAllStoresAvailable(QueryableStateIntegrationTest.java:502)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-4222) Transient failure in QueryableStateIntegrationTest.queryOnRebalance

2017-01-11 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4222?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-4222:
-
Issue Type: Sub-task  (was: Bug)
Parent: KAFKA-2054

> Transient failure in QueryableStateIntegrationTest.queryOnRebalance
> ---
>
> Key: KAFKA-4222
> URL: https://issues.apache.org/jira/browse/KAFKA-4222
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams, unit tests
>Reporter: Jason Gustafson
>Assignee: Matthias J. Sax
> Fix For: 0.10.1.0
>
>
> Seen here: https://builds.apache.org/job/kafka-trunk-jdk8/915/console
> {code}
> org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
> queryOnRebalance[1] FAILED
> java.lang.AssertionError: Condition not met within timeout 3. waiting 
> for metadata, store and value to be non null
> at 
> org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:276)
> at 
> org.apache.kafka.streams.integration.QueryableStateIntegrationTest.verifyAllKVKeys(QueryableStateIntegrationTest.java:263)
> at 
> org.apache.kafka.streams.integration.QueryableStateIntegrationTest.queryOnRebalance(QueryableStateIntegrationTest.java:342)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3875) Transient test failure: kafka.api.SslProducerSendTest.testSendNonCompressedMessageWithCreateTime

2017-01-11 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15819596#comment-15819596
 ] 

Guozhang Wang commented on KAFKA-3875:
--

Another failure case of this: 
https://builds.apache.org/job/kafka-pr-jdk8-scala2.12/758/testReport/junit/kafka.api/SslProducerSendTest/testSendCompressedMessageWithCreateTime/

{code}
Stacktrace

java.lang.AssertionError: Should have offset 100 but only successfully sent 0 
expected:<100> but was:<0>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:645)
at 
kafka.api.BaseProducerSendTest.sendAndVerifyTimestamp(BaseProducerSendTest.scala:219)
at 
kafka.api.BaseProducerSendTest.testSendCompressedMessageWithCreateTime(BaseProducerSendTest.scala:169)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:114)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:57)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:66)
at 
org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
at sun.reflect.GeneratedMethodAccessor16.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at 
org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
at 
org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
at 
org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:109)
at sun.reflect.GeneratedMethodAccessor15.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at 
org.gradle.internal.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:377)
at 
org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:54)
at 
org.gradle.internal.concurrent.StoppableExecutorImpl$1.run(StoppableExecutorImpl.java:40)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
{code}

Seems this is not resolved yet.

> Transient test failure: 
> kafka.api.SslProducerSendTest.testSendNonCompressedMessageWithCreateTime
> 

[jira] [Updated] (KAFKA-3896) Unstable test KStreamRepartitionJoinTest.shouldCorrectlyRepartitionOnJoinOperations

2017-01-11 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-3896:
-
Issue Type: Sub-task  (was: Bug)
Parent: KAFKA-2054

> Unstable test 
> KStreamRepartitionJoinTest.shouldCorrectlyRepartitionOnJoinOperations
> ---
>
> Key: KAFKA-3896
> URL: https://issues.apache.org/jira/browse/KAFKA-3896
> Project: Kafka
>  Issue Type: Sub-task
>  Components: unit tests
>Reporter: Ashish K Singh
>Assignee: Damian Guy
> Fix For: 0.10.1.0
>
>
> {{KStreamRepartitionJoinTest.shouldCorrectlyRepartitionOnJoinOperations}} 
> seems to be unstable. A failure can be found 
> [here|https://builds.apache.org/job/kafka-trunk-git-pr-jdk7/4363/]. Could not 
> reproduce the test failure locally though.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (KAFKA-3896) Unstable test KStreamRepartitionJoinTest.shouldCorrectlyRepartitionOnJoinOperations

2017-01-11 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang reopened KAFKA-3896:
--

> Unstable test 
> KStreamRepartitionJoinTest.shouldCorrectlyRepartitionOnJoinOperations
> ---
>
> Key: KAFKA-3896
> URL: https://issues.apache.org/jira/browse/KAFKA-3896
> Project: Kafka
>  Issue Type: Bug
>  Components: unit tests
>Reporter: Ashish K Singh
>Assignee: Damian Guy
> Fix For: 0.10.1.0
>
>
> {{KStreamRepartitionJoinTest.shouldCorrectlyRepartitionOnJoinOperations}} 
> seems to be unstable. A failure can be found 
> [here|https://builds.apache.org/job/kafka-trunk-git-pr-jdk7/4363/]. Could not 
> reproduce the test failure locally though.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3896) Unstable test KStreamRepartitionJoinTest.shouldCorrectlyRepartitionOnJoinOperations

2017-01-11 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15819555#comment-15819555
 ] 

Guozhang Wang commented on KAFKA-3896:
--

{code}
java.lang.AssertionError: Condition not met within timeout 6. Did not 
receive 5 number of records
at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:259)
at 
org.apache.kafka.streams.integration.utils.IntegrationTestUtils.waitUntilMinValuesRecordsReceived(IntegrationTestUtils.java:253)
at 
org.apache.kafka.streams.integration.KStreamRepartitionJoinTest.receiveMessages(KStreamRepartitionJoinTest.java:401)
at 
org.apache.kafka.streams.integration.KStreamRepartitionJoinTest.verifyCorrectOutput(KStreamRepartitionJoinTest.java:311)
at 
org.apache.kafka.streams.integration.KStreamRepartitionJoinTest.shouldCorrectlyRepartitionOnJoinOperations(KStreamRepartitionJoinTest.java:145)
{code}

Saw another occurrence of this failure; re-opening.

> Unstable test 
> KStreamRepartitionJoinTest.shouldCorrectlyRepartitionOnJoinOperations
> ---
>
> Key: KAFKA-3896
> URL: https://issues.apache.org/jira/browse/KAFKA-3896
> Project: Kafka
>  Issue Type: Bug
>  Components: unit tests
>Reporter: Ashish K Singh
>Assignee: Damian Guy
> Fix For: 0.10.1.0
>
>
> {{KStreamRepartitionJoinTest.shouldCorrectlyRepartitionOnJoinOperations}} 
> seems to be unstable. A failure can be found 
> [here|https://builds.apache.org/job/kafka-trunk-git-pr-jdk7/4363/]. Could not 
> reproduce the test failure locally though.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-3715) Higher granularity streams metrics

2017-01-11 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-3715.
--
Resolution: Fixed

> Higher granularity streams metrics 
> ---
>
> Key: KAFKA-3715
> URL: https://issues.apache.org/jira/browse/KAFKA-3715
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Jeff Klukas
>Assignee: aarti gupta
>  Labels: api
> Fix For: 0.10.2.0
>
>
> Originally proposed by [~guozhang] in 
> https://github.com/apache/kafka/pull/1362#issuecomment-218326690
> We can consider adding metrics for process / punctuate / commit rate at the 
> granularity of each processor node in addition to the global rate mentioned 
> above. This is very helpful in debugging.
> We can consider adding rate / total cumulated metrics for context.forward 
> indicating how many records were forwarded downstream from this processor 
> node as well. This is helpful in debugging.
> We can consider adding metrics for each stream partition's timestamp. This is 
> helpful in debugging.
> Besides the latency metrics, we can also add throughput metrics in terms of 
> source records consumed.
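The per-node counters suggested above can be sketched in plain Java. The names here (recordForward, total) are illustrative, not Kafka's actual Metrics/Sensor API:

```java
import java.util.HashMap;
import java.util.Map;

public class PerNodeMetrics {
    // Sketch of per-processor-node forward counters: one cumulative total
    // per node name, incremented on every context.forward() call.
    private final Map<String, Long> forwardTotals = new HashMap<>();

    // Would be called from context.forward() for the current node.
    void recordForward(String nodeName) {
        forwardTotals.merge(nodeName, 1L, Long::sum);
    }

    long total(String nodeName) {
        return forwardTotals.getOrDefault(nodeName, 0L);
    }

    public static void main(String[] args) {
        PerNodeMetrics m = new PerNodeMetrics();
        m.recordForward("KSTREAM-MAP-01");
        m.recordForward("KSTREAM-MAP-01");
        m.recordForward("KSTREAM-FILTER-04");
        System.out.println(m.total("KSTREAM-MAP-01"));    // 2
        System.out.println(m.total("KSTREAM-FILTER-04")); // 1
    }
}
```

A rate metric would additionally divide such totals by the elapsed window time.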





[jira] [Updated] (KAFKA-4060) Remove ZkClient dependency in Kafka Streams

2017-01-11 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-4060:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Issue resolved by pull request 1884
[https://github.com/apache/kafka/pull/1884]

> Remove ZkClient dependency in Kafka Streams
> ---
>
> Key: KAFKA-4060
> URL: https://issues.apache.org/jira/browse/KAFKA-4060
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Hojjat Jafarpour
>  Labels: kip
> Fix For: 0.10.2.0
>
>
> In Kafka Streams we need to dynamically create or update those internal 
> topics (i.e. repartition topics) upon rebalance, inside 
> {{InternalTopicManager}} which is triggered by {{StreamPartitionAssignor}}. 
> Currently we are using {{ZkClient}} to talk to ZK directly for such actions.
> With create and delete topics request merged in by [~granthenke] as part of 
> KIP-4, we should now be able to remove the ZkClient dependency and directly 
> use these requests.
> Related: 
> 1. KIP-4. 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-4+-+Command+line+and+centralized+administrative+operations
> 2. Consumer Reblance Protocol. 
> https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Client-side+Assignment+Proposal





[jira] [Updated] (KAFKA-4481) Relax Kafka Streams API type constraints

2017-01-11 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-4481:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Issue resolved by pull request 2205
[https://github.com/apache/kafka/pull/2205]

> Relax Kafka Streams API type constraints
> 
>
> Key: KAFKA-4481
> URL: https://issues.apache.org/jira/browse/KAFKA-4481
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Xavier Léauté
>Assignee: Xavier Léauté
>  Labels: kip, usability
> Fix For: 0.10.2.0
>
>
> Streams API methods that apply transformations to streams are currently 
> invariant in the key and value types, when they should probably be 
> contravariant in those types.
> For instance, {{KStream.filter(Predicate<K, V> predicate)}} should be 
> {{KStream.filter(Predicate<? super K, ? super V> predicate)}} to accept 
> predicates that can act on any supertype of K, or V.
> Same thing applies to method that take {{Aggregator}}, {{StreamPartitioner}}, 
> {{KeyValueMapper}}, {{ValueMapper}}, {{ProcessorSupplier}}, {{ValueJoiner}}, 
> etc.
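The contravariance argument above can be sketched in plain Java, using java.util.function.BiPredicate as a stand-in for Kafka's Predicate (this is an illustration, not the Kafka Streams API itself):

```java
import java.util.function.BiPredicate;

public class ContravarianceDemo {
    // Invariant signature: accepts only a BiPredicate<String, Long> exactly.
    static boolean invariantTest(BiPredicate<String, Long> p, String k, Long v) {
        return p.test(k, v);
    }

    // Contravariant signature: also accepts predicates over supertypes,
    // e.g. a reusable BiPredicate<Object, Object>.
    static boolean contravariantTest(BiPredicate<? super String, ? super Long> p,
                                     String k, Long v) {
        return p.test(k, v);
    }

    public static void main(String[] args) {
        BiPredicate<Object, Object> nonNull = (k, v) -> k != null && v != null;
        // invariantTest(nonNull, "a", 1L);                  // would not compile
        System.out.println(contravariantTest(nonNull, "a", 1L)); // true
    }
}
```

The same reasoning applies to Aggregator, KeyValueMapper, ValueJoiner, etc.: a function over supertypes of K and V can safely be applied to any K and V.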





[jira] [Updated] (KAFKA-4604) Gradle Eclipse plugin creates projects for non project subfolders

2017-01-10 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4604?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-4604:
-
Assignee: Dhwani Katagade

> Gradle Eclipse plugin creates projects for non project subfolders
> -
>
> Key: KAFKA-4604
> URL: https://issues.apache.org/jira/browse/KAFKA-4604
> Project: Kafka
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.11.0.0
> Environment: Tried with Gradle 3.2.1 and Eclipse Neon
>Reporter: Dhwani Katagade
>Assignee: Dhwani Katagade
>Priority: Minor
>  Labels: build, easyfix, patch
>
> Running the command *./gradlew eclipse* generates .project and .classpath 
> files for all projects. But it also generates these files for the root 
> project folder and the connect subfolder that holds the 4 connector projects 
> even though these folders are not actual project folders.
> The unnecessary connect project is benign, but the unnecessary kafka project 
> created for the root folder has a side effect. The root folder has a bin 
> directory that holds some scripts. When a _Clean all projects_ is done in 
> Eclipse, it cleans up the scripts in the bin directory. These have to be 
> restored by running *git checkout \-\- *. This same could become a 
> problem for the connect project as well if tomorrow we place some files under 
> connect/bin.





[jira] [Commented] (KAFKA-4604) Gradle Eclipse plugin creates projects for non project subfolders

2017-01-10 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15816000#comment-15816000
 ] 

Guozhang Wang commented on KAFKA-4604:
--

BTW I have just assigned it to you.

> Gradle Eclipse plugin creates projects for non project subfolders
> -
>
> Key: KAFKA-4604
> URL: https://issues.apache.org/jira/browse/KAFKA-4604
> Project: Kafka
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.11.0.0
> Environment: Tried with Gradle 3.2.1 and Eclipse Neon
>Reporter: Dhwani Katagade
>Priority: Minor
>  Labels: build, easyfix, patch
>
> Running the command *./gradlew eclipse* generates .project and .classpath 
> files for all projects. But it also generates these files for the root 
> project folder and the connect subfolder that holds the 4 connector projects 
> even though these folders are not actual project folders.
> The unnecessary connect project is benign, but the unnecessary kafka project 
> created for the root folder has a side effect. The root folder has a bin 
> directory that holds some scripts. When a _Clean all projects_ is done in 
> Eclipse, it cleans up the scripts in the bin directory. These have to be 
> restored by running *git checkout \-\- *. This same could become a 
> problem for the connect project as well if tomorrow we place some files under 
> connect/bin.





[jira] [Commented] (KAFKA-4604) Gradle Eclipse plugin creates projects for non project subfolders

2017-01-10 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15815872#comment-15815872
 ] 

Guozhang Wang commented on KAFKA-4604:
--

Just curious: is this only an issue for `eclipse`, or does it affect `idea` 
(IntelliJ) as well?

> Gradle Eclipse plugin creates projects for non project subfolders
> -
>
> Key: KAFKA-4604
> URL: https://issues.apache.org/jira/browse/KAFKA-4604
> Project: Kafka
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.11.0.0
> Environment: Tried with Gradle 3.2.1 and Eclipse Neon
>Reporter: Dhwani Katagade
>Priority: Minor
>  Labels: build, easyfix, patch
>
> Running the command *./gradlew eclipse* generates .project and .classpath 
> files for all projects. But it also generates these files for the root 
> project folder and the connect subfolder that holds the 4 connector projects 
> even though these folders are not actual project folders.
> The unnecessary connect project is benign, but the unnecessary kafka project 
> created for the root folder has a side effect. The root folder has a bin 
> directory that holds some scripts. When a _Clean all projects_ is done in 
> Eclipse, it cleans up the scripts in the bin directory. These have to be 
> restored by running *git checkout \-\- *. This same could become a 
> problem for the connect project as well if tomorrow we place some files under 
> connect/bin.





[jira] [Updated] (KAFKA-4609) KTable/KTable join followed by groupBy and aggregate/count can result in incorrect results

2017-01-10 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-4609:
-
Labels: architecture  (was: )

> KTable/KTable join followed by groupBy and aggregate/count can result in 
> incorrect results
> --
>
> Key: KAFKA-4609
> URL: https://issues.apache.org/jira/browse/KAFKA-4609
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.1.1, 0.10.2.0
>Reporter: Damian Guy
>Assignee: Damian Guy
>  Labels: architecture
>
> When caching is enabled, KTable/KTable joins can result in duplicate values 
> being emitted. This will occur if there were updates to the same key in both 
> tables. Each table is flushed independently, and each table will trigger the 
> join, so you get two results for the same key. 
> If we subsequently perform a groupBy and then aggregate operation we will now 
> process these duplicates resulting in incorrect aggregated values. For 
> example count will be double the value it should be.
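The duplicate-emission mechanism described above can be simulated in a few lines of plain Java (a sketch, not Kafka Streams code): both tables buffer an update to the same key, and each independent cache flush re-fires the join.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class IndependentFlushDemo {
    // Both sides of a simulated KTable/KTable join hold an update to key
    // "k"; each cache flush independently triggers the join for that key.
    static int joinResultCount() {
        Map<String, Integer> left = new HashMap<>();
        Map<String, Integer> right = new HashMap<>();
        left.put("k", 1);
        right.put("k", 10);

        List<Integer> joinResults = new ArrayList<>();
        // Flush of the left table's cache triggers the join for "k" ...
        joinResults.add(left.get("k") + right.get("k"));
        // ... and the independent flush of the right table triggers it again.
        joinResults.add(left.get("k") + right.get("k"));
        return joinResults.size();
    }

    public static void main(String[] args) {
        // A downstream groupBy + count processes both results, so the
        // count for "k" ends up at 2 instead of 1.
        System.out.println(joinResultCount()); // 2
    }
}
```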





[jira] [Updated] (KAFKA-4609) KTable/KTable join followed by groupBy and aggregate/count can result in incorrect results

2017-01-10 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-4609:
-
Component/s: streams

> KTable/KTable join followed by groupBy and aggregate/count can result in 
> incorrect results
> --
>
> Key: KAFKA-4609
> URL: https://issues.apache.org/jira/browse/KAFKA-4609
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.1.1, 0.10.2.0
>Reporter: Damian Guy
>Assignee: Damian Guy
>
> When caching is enabled, KTable/KTable joins can result in duplicate values 
> being emitted. This will occur if there were updates to the same key in both 
> tables. Each table is flushed independently, and each table will trigger the 
> join, so you get two results for the same key. 
> If we subsequently perform a groupBy and then aggregate operation we will now 
> process these duplicates resulting in incorrect aggregated values. For 
> example count will be double the value it should be.





[jira] [Updated] (KAFKA-4603) the argument of shell in doc wrong and command parsed error

2017-01-09 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-4603:
-
Assignee: Xin

> the argument of shell in doc wrong and command parsed error
> ---
>
> Key: KAFKA-4603
> URL: https://issues.apache.org/jira/browse/KAFKA-4603
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, documentation
>Affects Versions: 0.10.0.1, 0.10.2.0
> Environment: suse
>Reporter: Xin
>Assignee: Xin
>Priority: Minor
>
> according to section 7.6.2 "Migrating clusters" of the documentation:
> ./zookeeper-security-migration.sh --zookeeper.acl=secure 
> --zookeeper.connection=localhost:2181
> joptsimple.OptionArgumentConversionException: Cannot parse argument 
> 'localhost:2181' of option zookeeper.connection.timeout
>   at joptsimple.AbstractOptionSpec.convertWith(AbstractOptionSpec.java:93)
>   at 
> joptsimple.ArgumentAcceptingOptionSpec.convert(ArgumentAcceptingOptionSpec.java:274)
>   at joptsimple.OptionSet.valuesOf(OptionSet.java:223)
>   at joptsimple.OptionSet.valueOf(OptionSet.java:172)
>   at kafka.admin.ZkSecurityMigrator$.run(ZkSecurityMigrator.scala:111)
>   at kafka.admin.ZkSecurityMigrator$.main(ZkSecurityMigrator.scala:119)
>   at kafka.admin.ZkSecurityMigrator.main(ZkSecurityMigrator.scala)
> Caused by: joptsimple.internal.ReflectionException: 
> java.lang.NumberFormatException: For input string: "localhost:2181"
>   at 
> joptsimple.internal.Reflection.reflectionException(Reflection.java:140)
>   at joptsimple.internal.Reflection.invoke(Reflection.java:122)
>   at 
> joptsimple.internal.MethodInvokingValueConverter.convert(MethodInvokingValueConverter.java:48)
>   at joptsimple.internal.Reflection.convertWith(Reflection.java:128)
>   at joptsimple.AbstractOptionSpec.convertWith(AbstractOptionSpec.java:90)
>   ... 6 more
> Caused by: java.lang.NumberFormatException: For input string: "localhost:2181"
>   at 
> java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
>   at java.lang.Integer.parseInt(Integer.java:492)
>   at java.lang.Integer.valueOf(Integer.java:582)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at joptsimple.internal.Reflection.invoke(Reflection.java:119)
>   ... 9 more
> ===> the argument "zookeeper.connection" has been parsed as 
> "zookeeper.connection.timeout"
> Using the help output I found that the actual argument is:
> --zookeeper.connect            Sets the ZooKeeper connect string 
>                                (ensemble). This parameter takes a 
>                                comma-separated list of host:port 
>                                pairs. (default: localhost:2181)
> --zookeeper.connection.timeout Sets the ZooKeeper connection timeout.
> The documentation is wrong, and the code also has an issue:
>  in ZkSecurityMigrator.scala,
>   val parser = new OptionParser() ==>
> Any of --v, --ve, ... are accepted on the command line and treated as though 
> you had typed --verbose.
> To suppress this behavior, use the OptionParser constructor 
> OptionParser(boolean allowAbbreviations) and pass a value of false.
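The abbreviation behavior described above can be sketched in plain Java (an assumption about joptsimple-style prefix matching, not joptsimple's actual code): a flag that is an unambiguous prefix of exactly one declared option silently resolves to that option.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class AbbreviationDemo {
    // Resolve a given flag against declared option names: exact match wins,
    // otherwise an unambiguous prefix resolves to the single match.
    static String resolve(String given, List<String> declared) {
        for (String opt : declared) {
            if (opt.equals(given)) return opt;              // exact match wins
        }
        List<String> matches = new ArrayList<>();
        for (String opt : declared) {
            if (opt.startsWith(given)) matches.add(opt);
        }
        return matches.size() == 1 ? matches.get(0) : null; // null = unknown/ambiguous
    }

    public static void main(String[] args) {
        List<String> opts = Arrays.asList("zookeeper.connect",
                                          "zookeeper.connection.timeout");
        // "zookeeper.connection" is not a declared option, but it is a prefix
        // of exactly one declared option, so it resolves to the timeout
        // option -- which then fails to parse "localhost:2181" as an integer.
        System.out.println(resolve("zookeeper.connection", opts));
    }
}
```

This is why passing allowAbbreviations=false to the OptionParser constructor, as the description suggests, would turn the silent misrouting into an explicit unknown-option error.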





[jira] [Commented] (KAFKA-4125) Provide low-level Processor API meta data in DSL layer

2017-01-09 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15813181#comment-15813181
 ] 

Guozhang Wang commented on KAFKA-4125:
--

[~bbejeck] Could you lead a KIP proposal / discussion on this change, since it 
proposes to add some new APIs? A recent example can be seen here: 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-100+-+Relax+Type+constraints+in+Kafka+Streams+API

Since this is a straightforward addition I don't expect it to incur much 
overhead.

> Provide low-level Processor API meta data in DSL layer
> --
>
> Key: KAFKA-4125
> URL: https://issues.apache.org/jira/browse/KAFKA-4125
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: Guozhang Wang
>Priority: Minor
>
> For Processor API, user can get meta data like record offset, timestamp etc 
> via the provided {{Context}} object. It might be useful to allow uses to 
> access this information in DSL layer, too.
> The idea would be to do it "the Flink way", i.e., by providing
> RichFunctions; {{mapValues()}} for example
> takes a {{ValueMapper}} that only has the method
> {noformat}
> V2 apply(V1 value);
> {noformat}
> Thus, you cannot get any meta data within apply (it's completely "blind").
> We would add two more interfaces: {{RichFunction}} with a method
> {{open(Context context)}} and
> {noformat}
> RichValueMapper<V1, V2> extends ValueMapper<V1, V2>, RichFunction
> {noformat}
> This way, the user can choose to implement a Rich- or Standard-function and
> we do not need to change existing APIs. Both can be handed into
> {{KStream.mapValues()}} for example. Internally, we check if a Rich
> function is provided, and if yes, hand in the {{Context}} object once, to
> make it available to the user who can now access it within {{apply()}} -- of
> course, the user must set a member variable in {{open()}} to hold the
> reference to the Context object.
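The proposal above can be sketched as follows. All names here (RichFunction, RichValueMapper, Context) follow the JIRA description but are hypothetical; this is not the shipped Kafka Streams API:

```java
public class RichFunctionSketch {
    interface ValueMapper<V1, V2> { V2 apply(V1 value); }
    interface Context { long offset(); long timestamp(); }
    interface RichFunction { void open(Context context); }
    interface RichValueMapper<V1, V2> extends ValueMapper<V1, V2>, RichFunction {}

    // Simulates what the framework would do: call open() once with the
    // Context, then apply() per record. The mapper stores the Context in a
    // member variable so apply() can read record metadata.
    static String run() {
        RichValueMapper<String, String> mapper = new RichValueMapper<String, String>() {
            private Context ctx;
            @Override public void open(Context context) { this.ctx = context; }
            @Override public String apply(String value) { return value + "@" + ctx.offset(); }
        };
        mapper.open(new Context() {
            @Override public long offset() { return 42L; }
            @Override public long timestamp() { return 0L; }
        });
        return mapper.apply("payload");
    }

    public static void main(String[] args) {
        System.out.println(run()); // payload@42
    }
}
```

A plain ValueMapper would still work unchanged, since the rich variant is a subtype.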





[jira] [Resolved] (KAFKA-4551) StreamsSmokeTest.test_streams intermittent failure

2017-01-09 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-4551.
--
   Resolution: Fixed
Fix Version/s: (was: 0.10.1.2)
   0.10.2.0

Issue resolved by pull request 2319
[https://github.com/apache/kafka/pull/2319]

> StreamsSmokeTest.test_streams intermittent failure
> --
>
> Key: KAFKA-4551
> URL: https://issues.apache.org/jira/browse/KAFKA-4551
> Project: Kafka
>  Issue Type: Bug
>Reporter: Roger Hoover
>Assignee: Damian Guy
>Priority: Blocker
>  Labels: system-test-failure
> Fix For: 0.10.2.0
>
>
> {code}
> test_id:
> kafkatest.tests.streams.streams_smoke_test.StreamsSmokeTest.test_streams
> status: FAIL
> run time:   4 minutes 44.872 seconds
> 
> Traceback (most recent call last):
>   File 
> "/var/lib/jenkins/workspace/system-test-kafka/kafka/venv/local/lib/python2.7/site-packages/ducktape-0.6.0-py2.7.egg/ducktape/tests/runner_client.py",
>  line 123, in run
> data = self.run_test()
>   File 
> "/var/lib/jenkins/workspace/system-test-kafka/kafka/venv/local/lib/python2.7/site-packages/ducktape-0.6.0-py2.7.egg/ducktape/tests/runner_client.py",
>  line 176, in run_test
> return self.test_context.function(self.test)
>   File 
> "/var/lib/jenkins/workspace/system-test-kafka/kafka/tests/kafkatest/tests/streams/streams_smoke_test.py",
>  line 78, in test_streams
> node.account.ssh("grep SUCCESS %s" % self.driver.STDOUT_FILE, 
> allow_fail=False)
>   File 
> "/var/lib/jenkins/workspace/system-test-kafka/kafka/venv/local/lib/python2.7/site-packages/ducktape-0.6.0-py2.7.egg/ducktape/cluster/remoteaccount.py",
>  line 253, in ssh
> raise RemoteCommandError(self, cmd, exit_status, stderr.read())
> RemoteCommandError: ubuntu@worker6: Command 'grep SUCCESS 
> /mnt/streams/streams.stdout' returned non-zero exit status 1.
> {code}
> http://confluent-kafka-system-test-results.s3-us-west-2.amazonaws.com/2016-12-15--001.1481794587--apache--trunk--7049938/StreamsSmokeTest/test_streams/91.tgz





[jira] [Assigned] (KAFKA-3502) Build is killed during kafka streams tests due to `pure virtual method called` error

2017-01-06 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang reassigned KAFKA-3502:


Assignee: Guozhang Wang

> Build is killed during kafka streams tests due to `pure virtual method 
> called` error
> 
>
> Key: KAFKA-3502
> URL: https://issues.apache.org/jira/browse/KAFKA-3502
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Ashish K Singh
>Assignee: Guozhang Wang
>  Labels: transient-unit-test-failure
>
> Build failed due to failure in streams' test. Not clear which test led to 
> this.
> Jenkins console: 
> https://builds.apache.org/job/kafka-trunk-git-pr-jdk7/3210/console
> {code}
> org.apache.kafka.streams.kstream.internals.KTableFilterTest > testValueGetter 
> PASSED
> org.apache.kafka.streams.kstream.internals.KStreamFlatMapTest > testFlatMap 
> PASSED
> org.apache.kafka.streams.kstream.internals.KTableAggregateTest > testAggBasic 
> PASSED
> org.apache.kafka.streams.kstream.internals.KStreamFlatMapValuesTest > 
> testFlatMapValues PASSED
> org.apache.kafka.streams.kstream.KStreamBuilderTest > testMerge PASSED
> org.apache.kafka.streams.kstream.KStreamBuilderTest > testFrom PASSED
> org.apache.kafka.streams.kstream.KStreamBuilderTest > testNewName PASSED
> pure virtual method called
> terminate called without an active exception
> :streams:test FAILED
> FAILURE: Build failed with an exception.
> * What went wrong:
> Execution failed for task ':streams:test'.
> > Process 'Gradle Test Executor 4' finished with non-zero exit value 134
> {code}
> Tried reproducing the issue locally, but could not.





[jira] [Updated] (KAFKA-4607) Kafka Streams allows you to provide strings with illegal characters for internal topic names

2017-01-06 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4607?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-4607:
-
Assignee: Nikki Thean

> Kafka Streams allows you to provide strings with illegal characters for 
> internal topic names
> 
>
> Key: KAFKA-4607
> URL: https://issues.apache.org/jira/browse/KAFKA-4607
> Project: Kafka
>  Issue Type: Bug
>Reporter: Nikki Thean
>Assignee: Nikki Thean
>Priority: Minor
>
> When using the aggregate function of the Kafka Streams DSL, I supplied the 
> function with a name for the underlying state store that contained 
> whitespace. The stream application ran with no error message but seemed to be 
> hanging. 
> After a lot of troubleshooting, I looked in the logs on the Kafka cluster and 
> noticed that it was complaining about illegal characters in the log line. It 
> turns out that an internal changelog topic had been created that contained 
> whitespace and this was causing both Kafka and ZK to choke. I was unable to 
> use the delete-topic command line utility as it also threw the illegal 
> character error. In the end I had to manually delete the topics out of ZK 
> (using an open source ZK tool because the ZK delete command was choking on 
> any strings with spaces) and do some stuff with the offset logs.
> I am not well versed in the Kafka or Kafka Streams code, but from looking 
> through it, I notice that 
> [InternalTopicManager|https://github.com/apache/kafka/blob/trunk/streams/src/main/java/org/apache/kafka/streams/processor/internals/InternalTopicManager.java#L170]
>  does not seem to validate the names of auto-generated internal topics beyond 
> [checking for null topic 
> names|https://github.com/apache/kafka/blob/trunk/streams/src/main/java/org/apache/kafka/streams/processor/internals/InternalTopicConfig.java#L38]
>  rather than using Topic.validate.
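The missing validation could look roughly like the sketch below. The character class and the whitespace rejection are paraphrased from Kafka's topic-naming convention ([a-zA-Z0-9._-]); the exact rules enforced by Topic.validate (e.g. maximum length, the "." and ".." special cases) are not reproduced here:

```java
import java.util.regex.Pattern;

public class TopicNameCheck {
    // Only letters, digits, '.', '_' and '-' are allowed in a topic name.
    private static final Pattern LEGAL = Pattern.compile("[a-zA-Z0-9._\\-]+");

    static boolean isValidTopicName(String name) {
        return name != null && !name.isEmpty() && LEGAL.matcher(name).matches();
    }

    public static void main(String[] args) {
        System.out.println(isValidTopicName("my-app-store-changelog")); // true
        System.out.println(isValidTopicName("my store changelog"));     // false (whitespace)
    }
}
```

Failing fast with such a check at topology-build time would have surfaced the whitespace problem immediately, instead of letting the broker and ZooKeeper choke on the generated changelog topic.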





[jira] [Resolved] (KAFKA-1908) Split brain

2017-01-06 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-1908.
--
Resolution: Not A Problem

> Split brain
> ---
>
> Key: KAFKA-1908
> URL: https://issues.apache.org/jira/browse/KAFKA-1908
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.2.0
>Reporter: Alexey Ozeritskiy
>
> In some cases, there may be two leaders for one partition.
> Steps to reproduce:
> # We have 3 brokers, 1 partition with 3 replicas:
> {code}
> TopicAndPartition: [partition,0]Leader: 1   Replicas: [2,1,3]   
> ISR: [1,2,3]
> {code} 
> # controller works on broker 3
> # let the kafka port be 9092. We execute on broker 1:
> {code}
> iptables -A INPUT -p tcp --dport 9092 -j REJECT
> {code}
> # Initiate replica election
> # As a result:
> Broker 1:
> {code}
> TopicAndPartition: [partition,0]Leader: 1   Replicas: [2,1,3]   
> ISR: [1,2,3]
> {code}
> Broker 2:
> {code}
> TopicAndPartition: [partition,0]Leader: 2   Replicas: [2,1,3]   
> ISR: [1,2,3]
> {code}
> # Flush the iptables rules on broker 1
> Now we can produce messages to {code}[partition,0]{code}. Replica-1 will not 
> receive new data. A consumer can read data from replica-1 or replica-2. When 
> it reads from replica-1 it resets the offsets and then can read duplicates 
> from replica-2.
> We saw this situation in our production cluster when it had network problems.





[jira] [Updated] (KAFKA-3452) Support session windows

2017-01-06 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-3452:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Issue resolved by pull request 2166
[https://github.com/apache/kafka/pull/2166]

> Support session windows
> ---
>
> Key: KAFKA-3452
> URL: https://issues.apache.org/jira/browse/KAFKA-3452
> Project: Kafka
>  Issue Type: New Feature
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Damian Guy
>  Labels: api, kip
> Fix For: 0.10.2.0
>
>
> The Streams DSL currently does not provide session window as in the DataFlow 
> model. We have seen some common use cases for this feature and it's better 
> adding this support asap.
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-94+Session+Windows





[jira] [Commented] (KAFKA-4601) Avoid duplicated repartitioning in KStream DSL

2017-01-05 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15802904#comment-15802904
 ] 

Guozhang Wang commented on KAFKA-4601:
--

[~bbejeck] Note that it may be more complex than it sounds: today we translate 
the DSL operators to the underlying processor nodes in the topology independently 
(i.e. one at a time), so when we are translating a join after an aggregate, we do 
not know which processors have been created so far. To solve this specific issue 
we could apply a workaround, but a general solution would be to extend the 
translation mechanism to be "global": this can be as complex as query 
optimization.

> Avoid duplicated repartitioning in KStream DSL
> --
>
> Key: KAFKA-4601
> URL: https://issues.apache.org/jira/browse/KAFKA-4601
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Bill Bejeck
>  Labels: performance
>
> Consider the following DSL:
> {code}
> KStream<String, String> source = builder.stream(Serdes.String(), 
> Serdes.String(), "topic1").map(..);
> KTable<String, Long> counts = source
> .groupByKey()
> .count("Counts");
> KStream sink = source.leftJoin(counts, ..);
> {code}
> The resulted topology looks like this:
> {code}
> ProcessorTopology:
>   KSTREAM-SOURCE-00:
>   topics: [topic1]
>   children:   [KSTREAM-MAP-01]
>   KSTREAM-MAP-01:
>   children:   
> [KSTREAM-FILTER-04, KSTREAM-FILTER-07]
>   KSTREAM-FILTER-04:
>   children:   
> [KSTREAM-SINK-03]
>   KSTREAM-SINK-03:
>   topic:  X-Counts-repartition
>   KSTREAM-FILTER-07:
>   children:   
> [KSTREAM-SINK-06]
>   KSTREAM-SINK-06:
>   topic:  
> X-KSTREAM-MAP-01-repartition
> ProcessorTopology:
>   KSTREAM-SOURCE-08:
>   topics: 
> [X-KSTREAM-MAP-01-repartition]
>   children:   
> [KSTREAM-LEFTJOIN-09]
>   KSTREAM-LEFTJOIN-09:
>   states: [Counts]
>   KSTREAM-SOURCE-05:
>   topics: [X-Counts-repartition]
>   children:   
> [KSTREAM-AGGREGATE-02]
>   KSTREAM-AGGREGATE-02:
>   states: [Counts]
> {code}
> I.e. there are two repartition topics, one for the aggregate and one for the 
> join, which not only introduce unnecessary overheads but also mess up the 
> processing ordering (users are expecting each record to go through 
> aggregation first then the join operator). And in order to get the following 
> simpler topology users today need to add a {{through}} operator after {{map}} 
> manually to enforce repartitioning.
> {code}
> ProcessorTopology:
>   KSTREAM-SOURCE-00:
>   topics: [topic1]
>   children:   [KSTREAM-MAP-01]
>   KSTREAM-MAP-01:
>   children:   
> [KSTREAM-SINK-02]
>   KSTREAM-SINK-02:
>   topic:  topic 2
> ProcessorTopology:
>   KSTREAM-SOURCE-03:
>   topics: [topic 2]
>   children:   
> [KSTREAM-AGGREGATE-04, KSTREAM-LEFTJOIN-05]
>   KSTREAM-AGGREGATE-04:
>   states: [Counts]
>   KSTREAM-LEFTJOIN-05:
>   states: [Counts]
> {code} 
> This kind of optimization should be automatic in Streams, which we can 
> consider doing when extending from one-operator-at-a-time translation.





[jira] [Created] (KAFKA-4601) Avoid duplicated repartitioning in KStream DSL

2017-01-05 Thread Guozhang Wang (JIRA)
Guozhang Wang created KAFKA-4601:


 Summary: Avoid duplicated repartitioning in KStream DSL
 Key: KAFKA-4601
 URL: https://issues.apache.org/jira/browse/KAFKA-4601
 Project: Kafka
  Issue Type: Bug
  Components: streams
Reporter: Guozhang Wang


Consider the following DSL:

{code}
KStream<String, String> source = builder.stream(Serdes.String(), 
Serdes.String(), "topic1").map(..);

KTable<String, Long> counts = source
.groupByKey()
.count("Counts");

KStream sink = source.leftJoin(counts, ..);
{code}

The resulted topology looks like this:

{code}
ProcessorTopology:
KSTREAM-SOURCE-00:
topics: [topic1]
children:   [KSTREAM-MAP-01]
KSTREAM-MAP-01:
children:   
[KSTREAM-FILTER-04, KSTREAM-FILTER-07]
KSTREAM-FILTER-04:
children:   
[KSTREAM-SINK-03]
KSTREAM-SINK-03:
topic:  X-Counts-repartition
KSTREAM-FILTER-07:
children:   
[KSTREAM-SINK-06]
KSTREAM-SINK-06:
topic:  
X-KSTREAM-MAP-01-repartition

ProcessorTopology:
KSTREAM-SOURCE-08:
topics: 
[X-KSTREAM-MAP-01-repartition]
children:   
[KSTREAM-LEFTJOIN-09]
KSTREAM-LEFTJOIN-09:
states: [Counts]
KSTREAM-SOURCE-05:
topics: [X-Counts-repartition]
children:   
[KSTREAM-AGGREGATE-02]
KSTREAM-AGGREGATE-02:
states: [Counts]
{code}

I.e. there are two repartition topics, one for the aggregate and one for the 
join, which not only introduce unnecessary overheads but also mess up the 
processing ordering (users are expecting each record to go through aggregation 
first then the join operator). And in order to get the following simpler 
topology users today need to add a {{through}} operator after {{map}} manually 
to enforce repartitioning.

{code}
ProcessorTopology:
KSTREAM-SOURCE-00:
topics: [topic1]
children:   [KSTREAM-MAP-01]
KSTREAM-MAP-01:
children:   
[KSTREAM-SINK-02]
KSTREAM-SINK-02:
topic:  topic 2

ProcessorTopology:
KSTREAM-SOURCE-03:
topics: [topic 2]
children:   
[KSTREAM-AGGREGATE-04, KSTREAM-LEFTJOIN-05]
KSTREAM-AGGREGATE-04:
states: [Counts]
KSTREAM-LEFTJOIN-05:
states: [Counts]
{code} 

This kind of optimization should be automatic in Streams, which we can consider 
doing when extending from one-operator-at-a-time translation.





[jira] [Commented] (KAFKA-4592) Kafka Producer Metrics Invalid Value

2017-01-05 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15802054#comment-15802054
 ] 

Guozhang Wang commented on KAFKA-4592:
--

@huxi Could you follow the process described here for submitting your PR? In 
this way it will be auto-linked to the JIRA.

https://cwiki.apache.org/confluence/display/KAFKA/Contributing+Code+Changes#ContributingCodeChanges-PullRequest

> Kafka Producer Metrics Invalid Value
> 
>
> Key: KAFKA-4592
> URL: https://issues.apache.org/jira/browse/KAFKA-4592
> Project: Kafka
>  Issue Type: Bug
>  Components: core, producer 
>Affects Versions: 0.10.1.1
>Reporter: AJ Jwair
>Assignee: huxi
>
> Producer metrics
> Metric name: record-size-max
> When no records are produced during the monitoring window, the 
> record-size-max has an invalid value of -9.223372036854776E16
> Please notice that the value is not a very small number close to zero bytes, 
> it is negative 90 quadrillion bytes
> The same behavior was observed in: records-lag-max
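The failure mode described above can be simulated in plain Java (this is a sketch, not Kafka's Metrics code): a max-style accumulator seeded with a very negative sentinel reports that sentinel when no samples arrive during the window.

```java
public class EmptyWindowMax {
    // Returns the "max" over an empty monitoring window.
    static double staleMax() {
        // Illustrative seed chosen to match the magnitude reported in this
        // issue; Kafka's actual initialization value may differ.
        double max = -Long.MAX_VALUE / 100.0;
        double[] recordSizes = {};                  // no records produced
        for (double s : recordSizes) max = Math.max(max, s);
        return max;                                 // sentinel leaks out unchanged
    }

    public static void main(String[] args) {
        System.out.println(staleMax()); // roughly -9.2E16, not a real record size
    }
}
```

The fix would be to report something well-defined (e.g. NaN or 0) when the sample window is empty instead of exposing the internal seed.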





[jira] [Created] (KAFKA-4593) Task migration during rebalance callback process could lead the obsoleted task's IllegalStateException

2017-01-04 Thread Guozhang Wang (JIRA)
Guozhang Wang created KAFKA-4593:


 Summary: Task migration during rebalance callback process could 
lead the obsoleted task's IllegalStateException
 Key: KAFKA-4593
 URL: https://issues.apache.org/jira/browse/KAFKA-4593
 Project: Kafka
  Issue Type: Bug
  Components: streams
Reporter: Guozhang Wang


1. Assume two running threads, A and B, and one task t1, just for simplicity.
2. First rebalance is triggered, task t1 is assigned to A (B has no assigned 
task).
3. During the first rebalance callback, task t1's state store needs to be 
restored on thread A; this happens in "restoreActiveState", called from 
"createStreamTask".
4. Now suppose thread A has a long GC pause that causes it to stall; a second 
rebalance is then triggered and kicks A out of the group. B gets task t1 and 
performs the same restoration process, after which thread B continues to 
process data and update the state store, writing more messages to the 
changelog (so its log end offset has incremented).

5. After a while A resumes from the long GC pause. Not knowing that it has 
actually been kicked out of the group and that task t1 is no longer owned by 
it, it continues the restoration process but then realizes that the log end 
offset has advanced. When this happens, we see the following exception on 
thread A:

{code}
java.lang.IllegalStateException: task XXX Log end offset of
YYY-table_stream-changelog-ZZ should not change while
restoring: old end offset .., current offset ..

at
org.apache.kafka.streams.processor.internals.ProcessorStateManager.restoreActiveState(ProcessorStateManager.java:248)

at
org.apache.kafka.streams.processor.internals.ProcessorStateManager.register(ProcessorStateManager.java:201)

at
org.apache.kafka.streams.processor.internals.ProcessorContextImpl.register(ProcessorContextImpl.java:122)

at
org.apache.kafka.streams.state.internals.RocksDBWindowStore.init(RocksDBWindowStore.java:200)

at
org.apache.kafka.streams.state.internals.MeteredWindowStore.init(MeteredWindowStore.java:65)

at
org.apache.kafka.streams.state.internals.CachingWindowStore.init(CachingWindowStore.java:65)

at
org.apache.kafka.streams.processor.internals.AbstractTask.initializeStateStores(AbstractTask.java:86)

at
org.apache.kafka.streams.processor.internals.StreamTask.(StreamTask.java:120)

at
org.apache.kafka.streams.processor.internals.StreamThread.createStreamTask(StreamThread.java:794)

at
org.apache.kafka.streams.processor.internals.StreamThread$TaskCreator.createTask(StreamThread.java:1222)

at
org.apache.kafka.streams.processor.internals.StreamThread$AbstractTaskCreator.retryWithBackoff(StreamThread.java:1195)

at
org.apache.kafka.streams.processor.internals.StreamThread.addStreamTasks(StreamThread.java:897)

at
org.apache.kafka.streams.processor.internals.StreamThread.access$500(StreamThread.java:71)

at
org.apache.kafka.streams.processor.internals.StreamThread$1.onPartitionsAssigned(StreamThread.java:240)

at
org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete(ConsumerCoordinator.java:230)

at
org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:314)

at
org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:278)

at
org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:261)

at
org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1039)

at
org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1004)

at
org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:570)

at
org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:359)
{code}
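The consistency check that fires here can be sketched as follows (hypothetical names, not the actual {{ProcessorStateManager}} code): the restoring thread snapshots the changelog end offset up front and fails if any other writer advances it mid-restore, which is exactly what happens when a new owner has taken over the task.

```java
import java.util.concurrent.atomic.AtomicLong;

// Minimal sketch of the restore-time consistency check. Names are
// hypothetical; this is not the actual Streams restoration code.
public class RestoreCheckSketch {
    static final AtomicLong changelogEndOffset = new AtomicLong(100);

    static void restore(Runnable midRestoreEvent) {
        long endOffsetBefore = changelogEndOffset.get();
        midRestoreEvent.run();  // e.g. the task's new owner appends to the changelog
        long endOffsetAfter = changelogEndOffset.get();
        if (endOffsetAfter != endOffsetBefore) {
            throw new IllegalStateException(
                "Log end offset should not change while restoring: old end offset "
                + endOffsetBefore + ", current offset " + endOffsetAfter);
        }
    }

    public static void main(String[] args) {
        restore(() -> { });  // no concurrent writer: restore completes normally
        try {
            // A new owner wrote more records while the zombie thread was restoring.
            restore(() -> changelogEndOffset.addAndGet(5));
        } catch (IllegalStateException e) {
            System.out.println("stale owner detected: " + e.getMessage());
        }
    }
}
```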





[jira] [Created] (KAFKA-4587) Rethink Unification of Caching with Dedupping

2017-01-03 Thread Guozhang Wang (JIRA)
Guozhang Wang created KAFKA-4587:


 Summary: Rethink Unification of Caching with Dedupping
 Key: KAFKA-4587
 URL: https://issues.apache.org/jira/browse/KAFKA-4587
 Project: Kafka
  Issue Type: Bug
  Components: streams
Reporter: Guozhang Wang


This is discussed in PR https://github.com/apache/kafka/pull/1588

In order to support user-customizable state store suppliers in the DSL we did 
the following:

1) Introduced a {{TupleForwarder}} to forward tuples from cached stores that 
wrap user-customized stores.
2) Narrowed the scope to dedup on forwarding only for the default 
CachingXXStore wrapping RocksDB. 

With this, the unification of dedupping and caching is now less useful, and it 
complicates the inner forwarding implementation considerably. We need to 
reconsider this feature with finer-grained control for turning caching on/off 
per store, potentially with explicit triggers.





[jira] [Updated] (KAFKA-4584) Fail the 'kafka-configs' command if the config to be removed does not exist

2017-01-03 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-4584:
-
   Resolution: Fixed
Fix Version/s: 0.10.2.0
   Status: Resolved  (was: Patch Available)

Issue resolved by pull request 2300
[https://github.com/apache/kafka/pull/2300]

> Fail the 'kafka-configs' command if the config to be removed does not exist
> ---
>
> Key: KAFKA-4584
> URL: https://issues.apache.org/jira/browse/KAFKA-4584
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Vahid Hashemian
>Assignee: Vahid Hashemian
>Priority: Minor
> Fix For: 0.10.2.0
>
>
> With [KAFKA-4480|https://issues.apache.org/jira/browse/KAFKA-4480], the 
> {{kafka-configs}} command was improved to log an error in case some of the 
> user provided configs for removal do not exist. To make this command safer in 
> case of such user errors, it was suggested by [~ijuma] to fail the command 
> rather than just logging an error, and have the user retry the command with 
> the correct config names.





[jira] [Commented] (KAFKA-4222) Transient failure in QueryableStateIntegrationTest.queryOnRebalance

2017-01-03 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4222?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15795576#comment-15795576
 ] 

Guozhang Wang commented on KAFKA-4222:
--

Re-opening this issue as we have seen it happen again with a slightly 
different error message:

{code}
org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
queryOnRebalance[1] FAILED java.lang.AssertionError: Condition not met within 
timeout 6. Did not receive 1 number of records
{code}


> Transient failure in QueryableStateIntegrationTest.queryOnRebalance
> ---
>
> Key: KAFKA-4222
> URL: https://issues.apache.org/jira/browse/KAFKA-4222
> Project: Kafka
>  Issue Type: Bug
>  Components: streams, unit tests
>Reporter: Jason Gustafson
>Assignee: Eno Thereska
> Fix For: 0.10.1.0
>
>
> Seen here: https://builds.apache.org/job/kafka-trunk-jdk8/915/console
> {code}
> org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
> queryOnRebalance[1] FAILED
> java.lang.AssertionError: Condition not met within timeout 3. waiting 
> for metadata, store and value to be non null
> at 
> org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:276)
> at 
> org.apache.kafka.streams.integration.QueryableStateIntegrationTest.verifyAllKVKeys(QueryableStateIntegrationTest.java:263)
> at 
> org.apache.kafka.streams.integration.QueryableStateIntegrationTest.queryOnRebalance(QueryableStateIntegrationTest.java:342)
> {code}





[jira] [Commented] (KAFKA-4550) current trunk unstable

2017-01-03 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15795573#comment-15795573
 ] 

Guozhang Wang commented on KAFKA-4550:
--

I'm re-opening https://issues.apache.org/jira/browse/KAFKA-4222 for the second 
issue, and have just merged Ismael's patch for KAFKA-4528.

> current trunk unstable
> --
>
> Key: KAFKA-4550
> URL: https://issues.apache.org/jira/browse/KAFKA-4550
> Project: Kafka
>  Issue Type: Sub-task
>Affects Versions: 0.10.2.0
>Reporter: radai rosenblatt
> Attachments: run1.log, run2.log, run3.log, run4.log, run5.log
>
>
> on latest trunk (commit hash 908b6d1148df963d21a70aaa73a7a87571b965a9)
> when running the exact same build 5 times, I get:
> 3 failures (on 3 separate runs):
>kafka.api.SslProducerSendTest > testFlush FAILED java.lang.AssertionError: 
> No request is complete
>org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
> queryOnRebalance[1] FAILED java.lang.AssertionError: Condition not met within 
> timeout 6. Did not receive 1 number of records
>kafka.producer.ProducerTest > testAsyncSendCanCorrectlyFailWithTimeout 
> FAILED java.lang.AssertionError: Message set should have 1 message
> 1 success
> 1 stall (build hung)





[jira] [Reopened] (KAFKA-4222) Transient failure in QueryableStateIntegrationTest.queryOnRebalance

2017-01-03 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4222?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang reopened KAFKA-4222:
--

> Transient failure in QueryableStateIntegrationTest.queryOnRebalance
> ---
>
> Key: KAFKA-4222
> URL: https://issues.apache.org/jira/browse/KAFKA-4222
> Project: Kafka
>  Issue Type: Bug
>  Components: streams, unit tests
>Reporter: Jason Gustafson
>Assignee: Eno Thereska
> Fix For: 0.10.1.0
>
>
> Seen here: https://builds.apache.org/job/kafka-trunk-jdk8/915/console
> {code}
> org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
> queryOnRebalance[1] FAILED
> java.lang.AssertionError: Condition not met within timeout 3. waiting 
> for metadata, store and value to be non null
> at 
> org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:276)
> at 
> org.apache.kafka.streams.integration.QueryableStateIntegrationTest.verifyAllKVKeys(QueryableStateIntegrationTest.java:263)
> at 
> org.apache.kafka.streams.integration.QueryableStateIntegrationTest.queryOnRebalance(QueryableStateIntegrationTest.java:342)
> {code}





[jira] [Updated] (KAFKA-4528) Failure in kafka.producer.ProducerTest.testAsyncSendCanCorrectlyFailWithTimeout

2017-01-03 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-4528:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Issue resolved by pull request 2298
[https://github.com/apache/kafka/pull/2298]

> Failure in 
> kafka.producer.ProducerTest.testAsyncSendCanCorrectlyFailWithTimeout
> ---
>
> Key: KAFKA-4528
> URL: https://issues.apache.org/jira/browse/KAFKA-4528
> Project: Kafka
>  Issue Type: Sub-task
>  Components: unit tests
>Reporter: Guozhang Wang
>Assignee: Ismael Juma
>  Labels: newbie
> Fix For: 0.10.2.0
>
>
> I have seen this failure a few times in the past few days, worth 
> investigating.
> Example:
> https://builds.apache.org/job/kafka-pr-jdk8-scala2.12/79/testReport/junit/kafka.producer/ProducerTest/testAsyncSendCanCorrectlyFailWithTimeout/
> {code}
> Stacktrace
> java.lang.AssertionError: Message set should have 1 message
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> kafka.producer.ProducerTest.testAsyncSendCanCorrectlyFailWithTimeout(ProducerTest.scala:313)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:114)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:57)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:66)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
>   at sun.reflect.GeneratedMethodAccessor16.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
>   at 
> org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
>   at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
>   at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:109)
>   at sun.reflect.GeneratedMethodAccessor15.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.internal.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:377)
>   at 
> org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:54)
> 

[jira] [Updated] (KAFKA-2434) remove roundrobin identical topic constraint in consumer coordinator (old API)

2017-01-03 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-2434:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Issue resolved by pull request 145
[https://github.com/apache/kafka/pull/145]

> remove roundrobin identical topic constraint in consumer coordinator (old API)
> --
>
> Key: KAFKA-2434
> URL: https://issues.apache.org/jira/browse/KAFKA-2434
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Andrew Olson
>Assignee: Andrew Olson
> Fix For: 0.10.2.0
>
> Attachments: KAFKA-2434.patch
>
>
> The roundrobin strategy algorithm improvement made in KAFKA-2196 should be 
> applied to the original high-level consumer API as well.





[jira] [Updated] (KAFKA-4561) Ordering of operations in StreamThread.shutdownTasksAndState may void at-least-once guarantees

2017-01-03 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-4561:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Issue resolved by pull request 2281
[https://github.com/apache/kafka/pull/2281]

> Ordering of operations in StreamThread.shutdownTasksAndState may void 
> at-least-once guarantees
> --
>
> Key: KAFKA-4561
> URL: https://issues.apache.org/jira/browse/KAFKA-4561
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.2.0
>Reporter: Damian Guy
>Assignee: Damian Guy
> Fix For: 0.10.2.0
>
>
> In {{shutdownTasksAndState}} we currently commit offsets as the first step. 
> If a subsequent step throws an exception, e.g., flushing the producer, then 
> this would violate the at-least-once guarantees.
> We need to commit only after all other state has been flushed.
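The ordering issue can be sketched as follows (hypothetical shutdown logic, not the actual {{StreamThread}} code): if offsets are committed before the producer flush and the flush then fails, the committed position has already moved past records that were never durably written.

```java
import java.util.*;

// Sketch of why shutdown ordering matters for at-least-once. Hypothetical
// logic, not the actual StreamThread.shutdownTasksAndState code.
public class ShutdownOrderSketch {
    // Returns true on a clean shutdown; records the steps that completed.
    static boolean shutdown(boolean commitFirst, boolean flushFails, List<String> log) {
        try {
            if (commitFirst) log.add("commit");     // buggy order: commit before flush
            if (flushFails) throw new RuntimeException("producer flush failed");
            log.add("flush");
            if (!commitFirst) log.add("commit");    // safe order: commit last
            return true;
        } catch (RuntimeException e) {
            return false;                           // shutdown aborted mid-way
        }
    }

    public static void main(String[] args) {
        List<String> bad = new ArrayList<>();
        shutdown(true, true, bad);
        // Offsets committed but data never flushed: at-least-once is violated.
        System.out.println("commit-first + failed flush: " + bad);
        List<String> good = new ArrayList<>();
        shutdown(false, true, good);
        // Flush failed before any commit: nothing lost, records are reprocessed.
        System.out.println("flush-first + failed flush:  " + good);
    }
}
```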





[jira] [Updated] (KAFKA-4480) kafka-configs will execute the removal of an invalid property and not report an error

2016-12-31 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-4480:
-
   Resolution: Fixed
Fix Version/s: 0.10.2.0
   Status: Resolved  (was: Patch Available)

Issue resolved by pull request 2218
[https://github.com/apache/kafka/pull/2218]

> kafka-configs will execute the removal of an invalid property and not report 
> an error
> -
>
> Key: KAFKA-4480
> URL: https://issues.apache.org/jira/browse/KAFKA-4480
> Project: Kafka
>  Issue Type: Bug
>  Components: config
>Affects Versions: 0.10.0.0
> Environment: CentOS Linux release 7.2.1511 (Core)
> java version "1.8.0_102"
>Reporter: Justin Manchester
>Assignee: Vahid Hashemian
> Fix For: 0.10.2.0
>
>
> Problem:
> kafka-configs will execute the removal of an invalid property and not report 
> an error
> Steps to Reproduce:
> 1. Add a config property to a topic:
> kafka-configs --zookeeper localhost:2181 --entity-type topics --entity-name 
> test1 --alter --add-config max.message.bytes=128000
> 2. Confirm config is present:
> kafka-topics --zookeeper localhost:2181 --describe --topic test1
> Topic:test1 PartitionCount:1 ReplicationFactor:1 
> Configs:max.message.bytes=128000
>  Topic: test1 Partition: 0 Leader: 0 Replicas: 0 Isr: 0
> 3. Remove config:
> kafka-configs --zookeeper localhost:2181 --entity-type topics --entity-name 
> test1 --alter --delete-config  max.message.bytes=128000
> Updated config for topic: "test1".
> 4. Config is still present and no error is thrown:
> kafka-topics --zookeeper localhost:2181 --describe --topic test1
> Topic:test1 PartitionCount:1 ReplicationFactor:1 
> Configs:max.message.bytes=128000
>  Topic: test1 Partition: 0 Leader: 0 Replicas: 0 Isr: 0
> This is due to the "=128000" in the removal statement.
> Impact:
> 1. We would expect there to be an error statement if an invalid property is 
> specified for removal.  This can lead to unforeseen consequences in heavily 
> automated environments.
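The stricter behavior requested here can be sketched as follows (hypothetical, not the actual {{kafka-configs}} code): reject any delete entry that is not an existing config name, so a stray "name=value" fails loudly instead of silently doing nothing.

```java
import java.util.*;

// Hypothetical sketch of delete-config validation; not the actual
// kafka-configs tool code. Delete entries must be existing config *names*,
// so "max.message.bytes=128000" is rejected rather than silently ignored.
public class DeleteConfigSketch {
    static void deleteConfigs(Map<String, String> configs, List<String> toDelete) {
        List<String> invalid = new ArrayList<>();
        for (String name : toDelete) {
            if (!configs.containsKey(name)) invalid.add(name);
        }
        if (!invalid.isEmpty()) {
            throw new IllegalArgumentException("Invalid config name(s): " + invalid);
        }
        toDelete.forEach(configs::remove);
    }

    public static void main(String[] args) {
        Map<String, String> configs = new HashMap<>(Map.of("max.message.bytes", "128000"));
        deleteConfigs(configs, List.of("max.message.bytes"));  // valid name: removed
        try {
            // "name=value" is not a config name: the command should fail.
            deleteConfigs(configs, List.of("max.message.bytes=128000"));
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```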





[jira] [Created] (KAFKA-4569) Transient failure in org.apache.kafka.clients.consumer.KafkaConsumerTest.testWakeupWithFetchDataAvailable

2016-12-23 Thread Guozhang Wang (JIRA)
Guozhang Wang created KAFKA-4569:


 Summary: Transient failure in 
org.apache.kafka.clients.consumer.KafkaConsumerTest.testWakeupWithFetchDataAvailable
 Key: KAFKA-4569
 URL: https://issues.apache.org/jira/browse/KAFKA-4569
 Project: Kafka
  Issue Type: Sub-task
  Components: unit tests
Reporter: Guozhang Wang


One example is:

https://builds.apache.org/job/kafka-pr-jdk8-scala2.11/370/testReport/junit/org.apache.kafka.clients.consumer/KafkaConsumerTest/testWakeupWithFetchDataAvailable/

{code}
Stacktrace

java.lang.AssertionError
at org.junit.Assert.fail(Assert.java:86)
at org.junit.Assert.fail(Assert.java:95)
at 
org.apache.kafka.clients.consumer.KafkaConsumerTest.testWakeupWithFetchDataAvailable(KafkaConsumerTest.java:679)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:239)
at org.junit.rules.RunRules.evaluate(RunRules.java:20)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:114)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:57)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:66)
at 
org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at 
org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
at 
org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
at 
org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:109)
at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at 
org.gradle.internal.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:377)
at 
org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:54)
at 
org.gradle.internal.concurrent.StoppableExecutorImpl$1.run(StoppableExecutorImpl.java:40)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
{code}





[jira] [Updated] (KAFKA-3284) Consider removing beta label in security documentation

2016-12-22 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-3284:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Issue resolved by pull request 2286
[https://github.com/apache/kafka/pull/2286]

> Consider removing beta label in security documentation
> --
>
> Key: KAFKA-3284
> URL: https://issues.apache.org/jira/browse/KAFKA-3284
> Project: Kafka
>  Issue Type: Improvement
>  Components: security
>Reporter: Ismael Juma
>Assignee: Ismael Juma
> Fix For: 0.10.2.0
>
>
> We currently state that our security support is beta. It would be good to 
> remove that for 0.10.0.0.





[jira] [Commented] (KAFKA-4468) Correctly calculate the window end timestamp after read from state stores

2016-12-22 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15770284#comment-15770284
 ] 

Guozhang Wang commented on KAFKA-4468:
--

Since the window_size for a specific window object is static and fixed, I 
think we do not need to serialize it into the RocksDB payload. Instead we can 
first return the {{TimeWindow}} from the deserializer with its end set to a 
dummy value, and then replace it in the Windows class with the known 
window_size. 

Is that doable?

> Correctly calculate the window end timestamp after read from state stores
> -
>
> Key: KAFKA-4468
> URL: https://issues.apache.org/jira/browse/KAFKA-4468
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Bill Bejeck
>  Labels: architecture
>
> When storing the WindowedStore on the persistent KV store, we only use the 
> start timestamp of the window as part of the combo-key as (start-timestamp, 
> key). The reason that we do not add the end-timestamp as well is that we can 
> always calculate it from the start timestamp + window_length, and hence we 
> can save 8 bytes per key on the persistent KV store.
> However, after read it (via {{WindowedDeserializer}}) we do not set its end 
> timestamp correctly but just read it as an {{UnlimitedWindow}}. We should fix 
> this by calculating its end timestamp as mentioned above.
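The proposed fix can be sketched as follows (illustrative types, not the actual Streams classes): only the start timestamp is stored in the combo-key, and the end timestamp is recomputed as start + window size on deserialization, saving 8 bytes per key in the store.

```java
// Sketch of recomputing a window's end timestamp when deserializing the
// (start-timestamp, key) combo-key. Types and names are illustrative,
// not the actual Kafka Streams classes.
public class WindowEndSketch {
    static final class TimeWindowSketch {
        final long start;
        final long end;
        TimeWindowSketch(long start, long end) { this.start = start; this.end = end; }
    }

    // Only the start timestamp is persisted; end = start + window size.
    static TimeWindowSketch fromStoredStart(long startTimestamp, long windowSizeMs) {
        return new TimeWindowSketch(startTimestamp, startTimestamp + windowSizeMs);
    }

    public static void main(String[] args) {
        TimeWindowSketch w = fromStoredStart(60_000L, 30_000L);
        System.out.println("[" + w.start + ", " + w.end + ")");
    }
}
```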





[jira] [Updated] (KAFKA-3587) LogCleaner fails due to incorrect offset map computation on a replica

2016-12-21 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-3587:
-
Fix Version/s: 0.9.0.2

> LogCleaner fails due to incorrect offset map computation on a replica
> -
>
> Key: KAFKA-3587
> URL: https://issues.apache.org/jira/browse/KAFKA-3587
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.1
> Environment: Linux
>Reporter: Kiran Pillarisetty
>Assignee: Edoardo Comar
> Fix For: 0.10.0.0, 0.9.0.2
>
> Attachments: 0001-POC-improving-deduping-segments.patch
>
>
> Log Cleaner fails to compact a segment even when the number of messages in it 
> is less than the offset map.
> In version 0.9.0.1, (LogCleaner.scala -> buildOffsetMap()), LogCleaner 
> computes segment size by subtracting segment's base offset from the latest 
> offset ("segmentSize = segment.nextOffset() - segment.baseOffset").  This 
> works fine until you create another replica. When you create a replica, its 
> segments could contain data that is already compacted on other brokers. 
> Depending on the type of data, the offset difference could be too big, larger 
> than the offset map (maxDesiredMapSize), which causes LogCleaner to fail 
> on that segment.
> Scenario:
> - Kafka 0.9.0.1
> - Cluster has two brokers.
> - Server.properties:
> log.cleaner.enable=true
> log.cleaner.dedupe.buffer.size=10485760 #10MB
> log.roll.ms=30
> delete.topic.enable=true
> log.cleanup.policy=compact
> Steps to reproduce:
> 1. Create a topic with replication-factor of 1.
> ./kafka-topics.sh --zookeeper=localhost:2181 --create --topic 
> test.log.compact.1M --partitions 1 --replication-factor 1 --config 
> cleanup.policy=compact --config segment.ms=30
> 2. Use kafka-console-producer.sh to produce a single message with the 
> following key:
> LC1,{"test": "xyz"}
> 3. Use  kafka-console-producer.sh to produce a large number of messages with 
> the following key:
> LC2,{"test": "abc"}
> 4. Let log cleaner run. Make sure log is compacted.  Verify with:
>  ./kafka-run-class.sh kafka.tools.DumpLogSegments  --files 
> .log  --print-data-log
> Dumping .log
> Starting offset: 0
> offset: 0 position: 0 isvalid: true payloadsize: 11 magic: 0 compresscodec: 
> NoCompressionCodec crc: 3067045277 keysize: 11 key: LC1 payload: {"test": 
> "xyz"}
> offset: 7869818 position: 48 isvalid: true payloadsize: 11 magic: 0 
> compresscodec: NoCompressionCodec crc: 2668089711 keysize: 11 key: LC2 
> payload: {"test": "abc"}
> 5.  Increase Replication Factor to 2.  Followed these steps: 
> http://kafka.apache.org/documentation.html#basic_ops_increase_replication_factor
> 6. Notice that log cleaner fails to compact the newly created replica with 
> the following error.
> [2016-04-18 14:49:45,599] ERROR [kafka-log-cleaner-thread-0], Error due to  
> (kafka.log.LogCleaner)
> java.lang.IllegalArgumentException: requirement failed: 7206179 messages in 
> segment test.log.compact.1M-0/.log but offset map can fit 
> only 393215. You can increase log.cleaner.dedupe.buffer.size or decrease 
> log.cleaner.threads
> at scala.Predef$.require(Predef.scala:219)
> at 
> kafka.log.Cleaner$$anonfun$buildOffsetMap$4.apply(LogCleaner.scala:584)
> at 
> kafka.log.Cleaner$$anonfun$buildOffsetMap$4.apply(LogCleaner.scala:580)
> at 
> scala.collection.immutable.Stream$StreamWithFilter.foreach(Stream.scala:570)
> at kafka.log.Cleaner.buildOffsetMap(LogCleaner.scala:580)
> at kafka.log.Cleaner.clean(LogCleaner.scala:322)
> at 
> kafka.log.LogCleaner$CleanerThread.cleanOrSleep(LogCleaner.scala:230)
> at kafka.log.LogCleaner$CleanerThread.doWork(LogCleaner.scala:208)
> at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:63)
> [2016-04-18 14:49:45,601] INFO [kafka-log-cleaner-thread-0], Stopped  
> (kafka.log.LogCleaner)
> 7. Examine the entries in the replica segment:
> ./kafka-run-class.sh kafka.tools.DumpLogSegments --files 
> .log  --print-data-log
> There are only 218418 messages in that segment.
> However, Log Cleaner seems to think that there are 7206179 messages in that 
> segment (as per the above error)
> Error stems from this line in LogCleaner.scala:
> """val segmentSize = segment.nextOffset() - segment.baseOffset"""
> In Replica's log segment file ( .log), ending offset is 
> 7206178. Beginning offset is 0.  That makes Log Cleaner think that there are 
> 7206179 messages in that segment although there are only 218418 messages in 
> it.
> IMO,  to address this kind of scenario, LogCleaner.scala should check for the 
> number of messages in the segment, instead of subtracting beginning offset 
> from 
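The overestimate can be reproduced with a tiny sketch (illustrative, not LogCleaner code): in a compacted replica segment the surviving offsets are sparse, so the offset gap (nextOffset - baseOffset) can vastly exceed the actual message count.

```java
// Sketch of the segment-size overestimate described above; illustrative
// only, not the actual kafka.log.LogCleaner code.
public class SegmentSizeSketch {
    // What buildOffsetMap effectively estimated: the offset gap covered
    // by the segment, not the number of messages actually in it.
    static long estimatedMessages(long baseOffset, long nextOffset) {
        return nextOffset - baseOffset;
    }

    public static void main(String[] args) {
        // Offsets surviving compaction in a replica segment (sparse after dedup),
        // mirroring the two-message segment from the report.
        long[] survivingOffsets = {0L, 7_206_178L};
        long base = 0L;
        long next = survivingOffsets[survivingOffsets.length - 1] + 1;
        System.out.println("estimated: " + estimatedMessages(base, next)); // 7206179
        System.out.println("actual:    " + survivingOffsets.length);       // 2
    }
}
```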

[jira] [Updated] (KAFKA-3564) Count metric always increments by 1.0

2016-12-21 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-3564:
-
Resolution: Not A Bug
Status: Resolved  (was: Patch Available)

Closing as not a bug for now. Please feel free to re-open if there are any 
follow-up questions.

> Count metric always increments by 1.0
> -
>
> Key: KAFKA-3564
> URL: https://issues.apache.org/jira/browse/KAFKA-3564
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.1.0
>Reporter: Michael Coon
>Assignee: Kim Christensen
>
> The Count metric's update method always increments its value by 1.0 instead 
> of the value passed to it. If this is by design, it's misleading as I want to 
> be able to count based on values I send to the record method.
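The "Not A Bug" resolution above suggests the distinction is intentional: a Count stat counts invocations of record(), while a sum-style stat aggregates the recorded values. A minimal sketch of that distinction, using illustrative classes rather than Kafka's actual metrics API:

```java
// Hedged illustration: Count counts record() calls; Sum accumulates values.
public class CountVsSum {
    static class Count { double v; void record(double x) { v += 1.0; } }
    static class Sum   { double v; void record(double x) { v += x;   } }

    public static void main(String[] args) {
        Count count = new Count();
        Sum sum = new Sum();
        for (double x : new double[]{5.0, 3.0, 2.0}) {
            count.record(x);
            sum.record(x);
        }
        // Three record() calls, total value 10.0: pick the stat matching
        // whether you want occurrences or an aggregate of the values.
        System.out.println("count=" + count.v + " sum=" + sum.v);
    }
}
```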



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2620) Introduce Scalariform

2016-12-21 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-2620:
-
Reviewer: Gwen Shapira

> Introduce Scalariform
> -
>
> Key: KAFKA-2620
> URL: https://issues.apache.org/jira/browse/KAFKA-2620
> Project: Kafka
>  Issue Type: Bug
>  Components: build
>Reporter: Grant Henke
>Assignee: Grant Henke
>
> Many of our reviews include nit comments related to Scala style. Adding 
> [Scalariform|https://github.com/daniel-trinh/scalariform] allows us to 
> reformat the code based on configurable standards at build time, ensuring 
> uniform readability and a short review/commit cycle. 
> I expect this will have some discussion around the rules we would like to 
> include, and if we actually want to adopt this. I will submit a sample patch 
> to start the discussion.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2442) QuotasTest should not fail when cpu is busy

2016-12-21 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15767720#comment-15767720
 ] 

Guozhang Wang commented on KAFKA-2442:
--

[~lindong] Is this still a valid issue?

> QuotasTest should not fail when cpu is busy
> ---
>
> Key: KAFKA-2442
> URL: https://issues.apache.org/jira/browse/KAFKA-2442
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Dong Lin
>Assignee: Aditya Auradkar
>  Labels: transient-unit-test-failure
>
> We observed that testThrottledProducerConsumer in QuotasTest may fail or 
> succeed randomly. It appears that the test may fail when the system is slow. 
> We can add a timer in the integration test to avoid such random failures.
> See an example failure at 
> https://builds.apache.org/job/kafka-trunk-git-pr/166/console for patch 
> https://github.com/apache/kafka/pull/142.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2170) 10 LogTest cases failed for file.renameTo failed under windows

2016-12-21 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15767712#comment-15767712
 ] 

Guozhang Wang commented on KAFKA-2170:
--

Is this PR still valid in tackling this issue? 
https://github.com/apache/kafka/pull/154

> 10 LogTest cases failed for  file.renameTo failed under windows
> ---
>
> Key: KAFKA-2170
> URL: https://issues.apache.org/jira/browse/KAFKA-2170
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 0.10.1.0
> Environment: Windows
>Reporter: Honghai Chen
>Assignee: Jay Kreps
>
> get latest code from trunk, then run test 
> gradlew  -i core:test --tests kafka.log.LogTest
> Got 10 cases failed for same reason:
> kafka.common.KafkaStorageException: Failed to change the log file suffix from 
>  to .deleted for log segment 0
>   at kafka.log.LogSegment.changeFileSuffixes(LogSegment.scala:259)
>   at kafka.log.Log.kafka$log$Log$$asyncDeleteSegment(Log.scala:756)
>   at kafka.log.Log.kafka$log$Log$$deleteSegment(Log.scala:747)
>   at kafka.log.Log$$anonfun$deleteOldSegments$1.apply(Log.scala:514)
>   at kafka.log.Log$$anonfun$deleteOldSegments$1.apply(Log.scala:514)
>   at scala.collection.immutable.List.foreach(List.scala:318)
>   at kafka.log.Log.deleteOldSegments(Log.scala:514)
>   at kafka.log.LogTest.testAsyncDelete(LogTest.scala:633)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:601)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:44)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:180)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:41)
>   at org.junit.runners.ParentRunner$1.evaluate(ParentRunner.java:173)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:220)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:86)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:49)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:69)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:48)
>   at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:601)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.messaging.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
>   at 
> org.gradle.messaging.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
>   at $Proxy2.processTestClass(Unknown Source)
>   at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:105)
>   at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:601)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> 

[jira] [Commented] (KAFKA-1194) The kafka broker cannot delete the old log files after the configured time

2016-12-21 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15767709#comment-15767709
 ] 

Guozhang Wang commented on KAFKA-1194:
--

Is this PR trying to tackle this issue at all? 
https://github.com/apache/kafka/pull/154

> The kafka broker cannot delete the old log files after the configured time
> --
>
> Key: KAFKA-1194
> URL: https://issues.apache.org/jira/browse/KAFKA-1194
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 0.8.1
> Environment: window
>Reporter: Tao Qin
>Assignee: Jay Kreps
>  Labels: features, patch
> Fix For: 0.10.2.0
>
> Attachments: KAFKA-1194.patch, Untitled.jpg, kafka-1194-v1.patch, 
> kafka-1194-v2.patch, screenshot-1.png
>
>   Original Estimate: 72h
>  Remaining Estimate: 72h
>
> We tested it in a Windows environment, and set log.retention.hours to 24 
> hours.
> # The minimum age of a log file to be eligible for deletion
> log.retention.hours=24
> After several days, the Kafka broker still cannot delete the old log files, 
> and we get the following exceptions:
> [2013-12-19 01:57:38,528] ERROR Uncaught exception in scheduled task 
> 'kafka-log-retention' (kafka.utils.KafkaScheduler)
> kafka.common.KafkaStorageException: Failed to change the log file suffix from 
>  to .deleted for log segment 1516723
>  at kafka.log.LogSegment.changeFileSuffixes(LogSegment.scala:249)
>  at kafka.log.Log.kafka$log$Log$$asyncDeleteSegment(Log.scala:638)
>  at kafka.log.Log.kafka$log$Log$$deleteSegment(Log.scala:629)
>  at kafka.log.Log$$anonfun$deleteOldSegments$1.apply(Log.scala:418)
>  at kafka.log.Log$$anonfun$deleteOldSegments$1.apply(Log.scala:418)
>  at 
> scala.collection.LinearSeqOptimized$class.foreach(LinearSeqOptimized.scala:59)
>  at scala.collection.immutable.List.foreach(List.scala:76)
>  at kafka.log.Log.deleteOldSegments(Log.scala:418)
>  at 
> kafka.log.LogManager.kafka$log$LogManager$$cleanupExpiredSegments(LogManager.scala:284)
>  at 
> kafka.log.LogManager$$anonfun$cleanupLogs$3.apply(LogManager.scala:316)
>  at 
> kafka.log.LogManager$$anonfun$cleanupLogs$3.apply(LogManager.scala:314)
>  at 
> scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:743)
>  at scala.collection.Iterator$class.foreach(Iterator.scala:772)
>  at 
> scala.collection.JavaConversions$JIteratorWrapper.foreach(JavaConversions.scala:573)
>  at scala.collection.IterableLike$class.foreach(IterableLike.scala:73)
>  at 
> scala.collection.JavaConversions$JListWrapper.foreach(JavaConversions.scala:615)
>  at 
> scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:742)
>  at kafka.log.LogManager.cleanupLogs(LogManager.scala:314)
>  at 
> kafka.log.LogManager$$anonfun$startup$1.apply$mcV$sp(LogManager.scala:143)
>  at kafka.utils.KafkaScheduler$$anon$1.run(KafkaScheduler.scala:100)
>  at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>  at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
>  at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
>  at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  at java.lang.Thread.run(Thread.java:724)
> I think this error happens because Kafka tries to rename the log file while 
> it is still open, so we should close the file before renaming it.
> The index file uses a special data structure, the MappedByteBuffer. The 
> Javadoc describes it as:
> A mapped byte buffer and the file mapping that it represents remain valid 
> until the buffer itself is garbage-collected.
> Fortunately, I found a forceUnmap function in the Kafka code, and perhaps it 
> can be used to free the MappedByteBuffer.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2358) Cluster collection returning methods should never return null

2016-12-21 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-2358:
-
Assignee: Stevo Slavic

> Cluster collection returning methods should never return null
> -
>
> Key: KAFKA-2358
> URL: https://issues.apache.org/jira/browse/KAFKA-2358
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.9.0.0
>Reporter: Stevo Slavic
>Assignee: Stevo Slavic
>Priority: Minor
>
> The {{KafkaConsumer.partitionsFor}} method by its signature returns a 
> {{List<PartitionInfo>}}. The problem is that when (metadata for) a topic 
> does not exist, the current implementation returns null, which is considered 
> bad practice; instead of null it should return an empty list.
> Root cause is that the Cluster collection returning methods are returning 
> null.
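Until the Cluster methods are changed, callers can guard against the null return described above. The sketch below uses a stand-in lookup function rather than the real client call:

```java
import java.util.Collections;
import java.util.List;

// Caller-side guard: treat a null partitions-lookup result as "no metadata"
// instead of risking a NullPointerException downstream.
public class NullSafeLookup {
    // Stand-in for the real metadata lookup; returns null for unknown topics.
    static List<String> partitionsFor(String topic) {
        return null; // simulates metadata missing for an unknown topic
    }

    public static void main(String[] args) {
        List<String> partitions = partitionsFor("unknown-topic");
        List<String> safe = (partitions == null) ? Collections.<String>emptyList() : partitions;
        System.out.println("partitions=" + safe.size());
    }
}
```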



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2341) Need Standard Deviation Metrics in MetricsBench

2016-12-21 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15767672#comment-15767672
 ] 

Guozhang Wang commented on KAFKA-2341:
--

[~sebadiaz] Added you to the contributor list.

> Need Standard Deviation Metrics in MetricsBench
> ---
>
> Key: KAFKA-2341
> URL: https://issues.apache.org/jira/browse/KAFKA-2341
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Affects Versions: 0.8.2.1
>Reporter: sebastien diaz
>Assignee: sebastien diaz
>Priority: Minor
>
> The standard deviation is a measure used to quantify the amount of variation 
> or dispersion of a set of data values.
> Very useful; it could be added to other sensors as well.
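A streaming standard deviation of the kind requested above can be computed online with Welford's algorithm; this is a generic sketch, not Kafka's SampledStat API:

```java
// Online (one-pass) population standard deviation via Welford's algorithm.
public class StdDev {
    long n;
    double mean, m2; // m2 accumulates the sum of squared deviations

    void record(double x) {
        n++;
        double d = x - mean;
        mean += d / n;
        m2 += d * (x - mean);
    }

    double value() { return n > 1 ? Math.sqrt(m2 / n) : 0.0; }

    public static void main(String[] args) {
        StdDev s = new StdDev();
        for (double x : new double[]{2, 4, 4, 4, 5, 5, 7, 9}) s.record(x);
        // Population standard deviation of this classic sample is 2.0.
        System.out.printf("%.3f%n", s.value());
    }
}
```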



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2341) Need Standard Deviation Metrics in MetricsBench

2016-12-21 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-2341:
-
Assignee: sebastien diaz

> Need Standard Deviation Metrics in MetricsBench
> ---
>
> Key: KAFKA-2341
> URL: https://issues.apache.org/jira/browse/KAFKA-2341
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Affects Versions: 0.8.2.1
>Reporter: sebastien diaz
>Assignee: sebastien diaz
>Priority: Minor
>
> The standard deviation is a measure that is used to quantify the amount of 
> variation or dispersion of a set of data values.
> Very useful. Could be added to other sensors.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-4564) When the destination brokers are down or misconfigured in config, Streams should fail fast

2016-12-20 Thread Guozhang Wang (JIRA)
Guozhang Wang created KAFKA-4564:


 Summary: When the destination brokers are down or misconfigured in 
config, Streams should fail fast
 Key: KAFKA-4564
 URL: https://issues.apache.org/jira/browse/KAFKA-4564
 Project: Kafka
  Issue Type: Bug
  Components: streams
Reporter: Guozhang Wang


Today, if Kafka is down or users misconfigure the bootstrap list, Streams may 
just hang for a while without any error messages, even with log4j enabled, 
which is quite confusing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4198) Transient test failure: ConsumerBounceTest.testConsumptionWithBrokerFailures

2016-12-20 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15765468#comment-15765468
 ] 

Guozhang Wang commented on KAFKA-4198:
--

Saw another case of this failure, but with different exception message: 
https://builds.apache.org/job/kafka-pr-jdk8-scala2.12/279/console

{code}
kafka.api.ConsumerBounceTest > testConsumptionWithBrokerFailures FAILED
java.lang.IllegalArgumentException: You can only check the position for 
partitions assigned to this consumer.
at 
org.apache.kafka.clients.consumer.KafkaConsumer.position(KafkaConsumer.java:1271)
at 
kafka.api.ConsumerBounceTest.consumeWithBrokerFailures(ConsumerBounceTest.scala:96)
at 
kafka.api.ConsumerBounceTest.testConsumptionWithBrokerFailures(ConsumerBounceTest.scala:69)
{code}

> Transient test failure: ConsumerBounceTest.testConsumptionWithBrokerFailures
> 
>
> Key: KAFKA-4198
> URL: https://issues.apache.org/jira/browse/KAFKA-4198
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Ismael Juma
>  Labels: transient-unit-test-failure
>
> The issue seems to be that we call `startup` while `shutdown` is still taking 
> place.
> {code}
> java.lang.AssertionError: expected:<107> but was:<0>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:834)
>   at org.junit.Assert.assertEquals(Assert.java:645)
>   at org.junit.Assert.assertEquals(Assert.java:631)
>   at 
> kafka.api.ConsumerBounceTest$$anonfun$consumeWithBrokerFailures$2.apply(ConsumerBounceTest.scala:91)
>   at 
> kafka.api.ConsumerBounceTest$$anonfun$consumeWithBrokerFailures$2.apply(ConsumerBounceTest.scala:90)
>   at scala.collection.Iterator$class.foreach(Iterator.scala:893)
>   at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
>   at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
>   at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>   at 
> kafka.api.ConsumerBounceTest.consumeWithBrokerFailures(ConsumerBounceTest.scala:90)
>   at 
> kafka.api.ConsumerBounceTest.testConsumptionWithBrokerFailures(ConsumerBounceTest.scala:70)
> {code}
> {code}
> java.lang.IllegalStateException: Kafka server is still shutting down, cannot 
> re-start!
>   at kafka.server.KafkaServer.startup(KafkaServer.scala:184)
>   at 
> kafka.integration.KafkaServerTestHarness$$anonfun$restartDeadBrokers$2.apply$mcVI$sp(KafkaServerTestHarness.scala:117)
>   at 
> kafka.integration.KafkaServerTestHarness$$anonfun$restartDeadBrokers$2.apply(KafkaServerTestHarness.scala:116)
>   at 
> kafka.integration.KafkaServerTestHarness$$anonfun$restartDeadBrokers$2.apply(KafkaServerTestHarness.scala:116)
>   at 
> scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
>   at scala.collection.immutable.Range.foreach(Range.scala:160)
>   at 
> scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
>   at 
> kafka.integration.KafkaServerTestHarness$class.restartDeadBrokers(KafkaServerTestHarness.scala:116)
>   at 
> kafka.api.ConsumerBounceTest.restartDeadBrokers(ConsumerBounceTest.scala:34)
>   at 
> kafka.api.ConsumerBounceTest$BounceBrokerScheduler.doWork(ConsumerBounceTest.scala:158)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-4540) Suspended tasks that are not assigned to the StreamThread need to be closed before new active and standby tasks are created

2016-12-20 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-4540:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Issue resolved by pull request 2266
[https://github.com/apache/kafka/pull/2266]

> Suspended tasks that are not assigned to the StreamThread need to be closed 
> before new active and standby tasks are created
> ---
>
> Key: KAFKA-4540
> URL: https://issues.apache.org/jira/browse/KAFKA-4540
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.2.0
>Reporter: Damian Guy
>Assignee: Damian Guy
> Fix For: 0.10.2.0
>
>
> When partition assignment happens we first try to add the active tasks and 
> then add the standby tasks. The problem with this is that a new active task 
> might already be an existing suspended standby task. If this is the case, 
> then when the active task initialises it will throw an exception from RocksDB:
> {{Caused by: org.rocksdb.RocksDBException: IO error: lock 
> /tmp/kafka-streams-7071/kafka-music-charts/1_1/rocksdb/all-songs/LOCK: No 
> locks available}}
> We need to make sure we have removed and closed any no-longer-assigned 
> suspended tasks before creating new tasks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-4473) RecordCollector should handle retriable exceptions more strictly

2016-12-20 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-4473:
-
   Resolution: Fixed
Fix Version/s: 0.10.2.0
   Status: Resolved  (was: Patch Available)

Issue resolved by pull request 2249
[https://github.com/apache/kafka/pull/2249]

> RecordCollector should handle retriable exceptions more strictly
> 
>
> Key: KAFKA-4473
> URL: https://issues.apache.org/jira/browse/KAFKA-4473
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.1.0
>Reporter: Thomas Schulz
>Assignee: Damian Guy
>Priority: Critical
>  Labels: architecture
> Fix For: 0.10.2.0
>
>
> see: https://groups.google.com/forum/#!topic/confluent-platform/DT5bk1oCVk8
> There is probably a bug in the RecordCollector as described in my detailed 
> Cluster test published in the aforementioned post.
> The class RecordCollector has the following behavior:
> - if there is no exception, add the message offset to a map
> - otherwise, do not add the message offset and instead log the above statement
> Is it possible that this offset map contains the latest offset to commit? If 
> so, a message that fails might be overridden by a successful (later) message, 
> and the consumer would commit every message up to the latest offset.
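The hazard the report raises can be shown with a toy offset map; the structure and names below are illustrative, not the actual RecordCollector code:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: if only successful sends update a per-partition "latest offset",
// a later success can advance the commit point past an earlier failed message.
public class OffsetGapSketch {
    public static void main(String[] args) {
        Map<Integer, Long> offsets = new HashMap<>();
        long[] sent = {10, 11, 12};
        boolean[] succeeded = {true, false, true}; // offset 11 failed

        for (int i = 0; i < sent.length; i++) {
            // Failed offsets are never recorded, so the map keeps advancing.
            if (succeeded[i]) offsets.put(0, sent[i]);
        }
        // Committing offsets.get(0) + 1 would skip the failed record at 11.
        System.out.println("commit=" + (offsets.get(0) + 1));
    }
}
```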



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-4559) Add a site search bar on the Web site

2016-12-19 Thread Guozhang Wang (JIRA)
Guozhang Wang created KAFKA-4559:


 Summary: Add a site search bar on the Web site
 Key: KAFKA-4559
 URL: https://issues.apache.org/jira/browse/KAFKA-4559
 Project: Kafka
  Issue Type: Bug
  Components: website
Reporter: Guozhang Wang


As titled: since we are breaking the "documentation" HTML into sub-spaces and 
sub-pages, people can no longer simply use `control + f` on a single page, so 
a site-scoped search bar would help.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-4534) StreamPartitionAssignor only ever updates the partitionsByHostState and metadataWithInternalTopics once.

2016-12-19 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-4534:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Issue resolved by pull request 2256
[https://github.com/apache/kafka/pull/2256]

> StreamPartitionAssignor only ever updates the partitionsByHostState and 
> metadataWithInternalTopics once.
> 
>
> Key: KAFKA-4534
> URL: https://issues.apache.org/jira/browse/KAFKA-4534
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.2.0
>Reporter: Damian Guy
>Assignee: Damian Guy
> Fix For: 0.10.2.0
>
>
> StreamPartitionAssignor only ever updates the partitionsByHostState and 
> metadataWithInternalTopics once. This results in incorrect metadata on 
> rebalances.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-4539) StreamThread is not correctly creating StandbyTasks

2016-12-15 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-4539:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Issue resolved by pull request 2255
[https://github.com/apache/kafka/pull/2255]

> StreamThread is not correctly creating  StandbyTasks
> 
>
> Key: KAFKA-4539
> URL: https://issues.apache.org/jira/browse/KAFKA-4539
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.2.0
>Reporter: Damian Guy
>Assignee: Damian Guy
> Fix For: 0.10.2.0
>
>
> Fails because {{createStandbyTask(..)}} can return null if the topology for 
> the {{TaskId}} doesn't have any state stores.
> {code}
> [2016-12-14 12:20:29,758] ERROR [StreamThread-1] User provided listener 
> org.apache.kafka.streams.processor.internals.StreamThread$1 for group 
> kafka-music-charts failed on partition assignment 
> (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
> java.lang.NullPointerException
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread$StandbyTaskCreator.createTask(StreamThread.java:1241)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread$AbstractTaskCreator.retryWithBackoff(StreamThread.java:1188)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.addStandbyTasks(StreamThread.java:915)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.access$400(StreamThread.java:72)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread$1.onPartitionsAssigned(StreamThread.java:239)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete(ConsumerCoordinator.java:230)
>   at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:314)
>   at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:278)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:261)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1022)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:987)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:568)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:357)
> {code}
> Also fails because the checkpointedOffsets from the newly created 
> {{StandbyTask}} aren't added to the offsets map, so the partitions don't get 
> assigned. We then get:



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-4539) StreamThread is not correctly creating StandbyTasks

2016-12-15 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-4539:
-
Description: 
Fails because {{createStandbyTask(..)}} can return null if the topology for the 
{{TaskId}} doesn't have any state stores.

{code}
[2016-12-14 12:20:29,758] ERROR [StreamThread-1] User provided listener 
org.apache.kafka.streams.processor.internals.StreamThread$1 for group 
kafka-music-charts failed on partition assignment 
(org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
java.lang.NullPointerException
at 
org.apache.kafka.streams.processor.internals.StreamThread$StandbyTaskCreator.createTask(StreamThread.java:1241)
at 
org.apache.kafka.streams.processor.internals.StreamThread$AbstractTaskCreator.retryWithBackoff(StreamThread.java:1188)
at 
org.apache.kafka.streams.processor.internals.StreamThread.addStandbyTasks(StreamThread.java:915)
at 
org.apache.kafka.streams.processor.internals.StreamThread.access$400(StreamThread.java:72)
at 
org.apache.kafka.streams.processor.internals.StreamThread$1.onPartitionsAssigned(StreamThread.java:239)
at 
org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete(ConsumerCoordinator.java:230)
at 
org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:314)
at 
org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:278)
at 
org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:261)
at 
org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1022)
at 
org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:987)
at 
org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:568)
at 
org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:357)
{code}

Also fails because the checkpointedOffsets from the newly created 
{{StandbyTask}} aren't added to the offsets map, so the partitions don't get 
assigned. We then get:


  was:
Fails because {{createStandbyTask(..)}} can return null fi the topology for the 
{{TaskId}} doesn't have any state stores.

{code}
[2016-12-14 12:20:29,758] ERROR [StreamThread-1] User provided listener 
org.apache.kafka.streams.processor.internals.StreamThread$1 for group 
kafka-music-charts failed on partition assignment 
(org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
java.lang.NullPointerException
at 
org.apache.kafka.streams.processor.internals.StreamThread$StandbyTaskCreator.createTask(StreamThread.java:1241)
at 
org.apache.kafka.streams.processor.internals.StreamThread$AbstractTaskCreator.retryWithBackoff(StreamThread.java:1188)
at 
org.apache.kafka.streams.processor.internals.StreamThread.addStandbyTasks(StreamThread.java:915)
at 
org.apache.kafka.streams.processor.internals.StreamThread.access$400(StreamThread.java:72)
at 
org.apache.kafka.streams.processor.internals.StreamThread$1.onPartitionsAssigned(StreamThread.java:239)
at 
org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete(ConsumerCoordinator.java:230)
at 
org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:314)
at 
org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:278)
at 
org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:261)
at 
org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1022)
at 
org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:987)
at 
org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:568)
at 
org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:357)
{code}

Also fails because the checkpointedOffsets from the newly created 
{{StandbyTask}} aren't added to the offsets map, so the partitions don't get 
assigned. We then get:



> StreamThread is not correctly creating  StandbyTasks
> 
>
> Key: KAFKA-4539
> URL: https://issues.apache.org/jira/browse/KAFKA-4539
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.2.0
>Reporter: Damian Guy
>Assignee: Damian Guy
> Fix For: 0.10.2.0
>
>
> Fails because {{createStandbyTask(..)}} can return null if the topology for 
> the {{TaskId}} doesn't have any state stores.
> {code}
> [2016-12-14 12:20:29,758] ERROR [StreamThread-1] User provided listener 
> 

[jira] [Resolved] (KAFKA-4537) StreamPartitionAssignor incorrectly adds standby partitions to the partitionsByHostState map

2016-12-15 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-4537.
--
Resolution: Fixed

Issue resolved by pull request 2254
[https://github.com/apache/kafka/pull/2254]

> StreamPartitionAssignor incorrectly adds standby partitions to the 
> partitionsByHostState map
> 
>
> Key: KAFKA-4537
> URL: https://issues.apache.org/jira/browse/KAFKA-4537
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.2.0
>Reporter: Damian Guy
>Assignee: Damian Guy
> Fix For: 0.10.2.0
>
>
> If a KafkaStreams app is using standby tasks, then StreamPartitionAssignor 
> will add the standby partitions to the partitionsByHostState map for each 
> host. This is incorrect, as the partitionsByHostState map is used to resolve 
> which host is hosting a particular store for a key.
> The result is that metadata lookups for interactive queries can return an 
> incorrect host.
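The routing problem described above can be reduced to a map whose entries are overwritten by standby registrations; all names here are hypothetical, not the assignor's actual data structures:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: registering standby partitions in the same host map lets a standby
// host shadow the active owner, so interactive queries are routed wrongly.
public class HostLookupSketch {
    public static void main(String[] args) {
        Map<String, String> partitionToHost = new HashMap<>();
        partitionToHost.put("store-p0", "hostA"); // active (queryable) copy
        // Bug being described: standby copy registered too, overwriting it.
        partitionToHost.put("store-p0", "hostB"); // standby copy

        // hostB only holds a standby replica, so the query misses.
        System.out.println("query routed to " + partitionToHost.get("store-p0"));
    }
}
```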



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4540) Suspended tasks that are not assigned to the StreamThread need to be closed before new active and standby tasks are created

2016-12-15 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15752887#comment-15752887
 ] 

Guozhang Wang commented on KAFKA-4540:
--

Thanks for the explanation. Makes sense.

> Suspended tasks that are not assigned to the StreamThread need to be closed 
> before new active and standby tasks are created
> ---
>
> Key: KAFKA-4540
> URL: https://issues.apache.org/jira/browse/KAFKA-4540
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.2.0
>Reporter: Damian Guy
>Assignee: Damian Guy
> Fix For: 0.10.2.0
>
>
> When partition assignment happens we first try to add the active tasks and 
> then add the standby tasks. The problem with this is that a new active task 
> might already be an existing suspended standby task. If this is the case, then 
> when the active task initializes it will throw an exception from RocksDB:
> {{Caused by: org.rocksdb.RocksDBException: IO error: lock 
> /tmp/kafka-streams-7071/kafka-music-charts/1_1/rocksdb/all-songs/LOCK: No 
> locks available}}
> We need to make sure we have removed and closed any no-longer-assigned 
> suspended tasks before creating new tasks.
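The close-then-create ordering described above can be sketched generically. This is a hypothetical illustration, not Kafka's actual StreamThread code; the RocksDB state-directory lock is modeled as a simple set, and all names are made up:

```java
import java.util.*;

// Hypothetical sketch of the fix: on a new assignment, first close suspended
// tasks that were NOT re-assigned (releasing their state-directory locks),
// and only then create tasks for the new assignment.
class SuspendedTaskCleanup {
    static Set<String> heldLocks = new HashSet<>();     // stands in for RocksDB dir locks
    static Map<String, String> tasks = new HashMap<>(); // taskId -> "suspended" | "running"

    static void suspend(String taskId) {
        tasks.put(taskId, "suspended"); // a suspended task keeps its lock
        heldLocks.add(taskId);
    }

    static void onNewAssignment(Set<String> assigned) {
        // Step 1: close suspended tasks that are no longer assigned.
        tasks.keySet().removeIf(id -> {
            if (!assigned.contains(id)) {
                heldLocks.remove(id); // closing releases the lock
                return true;
            }
            return false;
        });
        // Step 2: create (or resume) tasks for the new assignment.
        for (String id : assigned) {
            if (!tasks.containsKey(id)) {
                if (heldLocks.contains(id))
                    throw new IllegalStateException("No locks available for " + id);
                heldLocks.add(id);
            }
            tasks.put(id, "running");
        }
    }

    public static void main(String[] args) {
        suspend("0_1");
        suspend("1_1");
        // 0_1 moves to another thread; 2_1 is newly assigned; 1_1 is reused.
        onNewAssignment(new HashSet<>(Arrays.asList("1_1", "2_1")));
        System.out.println(heldLocks.contains("0_1")); // false: lock released before creating 2_1
    }
}
```

If step 2 ran first, creating 2_1 while 0_1's lock was still held by a suspended task on the same host would reproduce the "No locks available" failure.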



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-4529) tombstone may be removed earlier than it should

2016-12-15 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-4529.
--
Resolution: Fixed

Marked as fixed for the 0.10.1.1 release process.

> tombstone may be removed earlier than it should
> ---
>
> Key: KAFKA-4529
> URL: https://issues.apache.org/jira/browse/KAFKA-4529
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.1.0
>Reporter: Jun Rao
>Assignee: Jiangjie Qin
> Fix For: 0.10.1.1
>
>
> As part of KIP-33, we introduced a regression in how tombstones are removed in 
> a compacted topic. We want to delay the removal of a tombstone to avoid the 
> case where a reader first reads a non-tombstone message on a key and then 
> doesn't see the tombstone for that key because it was deleted too quickly. So, a 
> tombstone is supposed to be removed from a compacted topic only after it has 
> been part of the cleaned portion of the log for delete.retention.ms.
> Before KIP-33, deleteHorizonMs in LogCleaner was calculated based on the last 
> modified time, which is monotonically increasing from old to new segments. 
> With KIP-33, deleteHorizonMs is calculated based on the message timestamp, 
> which is not necessarily monotonically increasing.
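The non-monotonicity the description points out can be seen with a toy example (illustrative values only; these are not Kafka's data structures): segment last-modified times always increase from old to new segments, but per-segment max message timestamps need not, e.g. when producers supply out-of-order client-side timestamps.

```java
// Toy illustration of why a message-timestamp-based deleteHorizonMs cannot
// be assumed monotonic across segments, unlike last-modified times.
class DeleteHorizonDemo {
    static boolean isMonotonic(long[] xs) {
        for (int i = 1; i < xs.length; i++)
            if (xs[i] < xs[i - 1]) return false;
        return true;
    }

    public static void main(String[] args) {
        // Three log segments, oldest first (values are made up).
        long[] lastModified    = {1000, 2000, 3000}; // file mtimes: always increasing
        long[] maxMsgTimestamp = {1500, 1200, 2500}; // client timestamps: can dip
        System.out.println(isMonotonic(lastModified));    // true
        System.out.println(isMonotonic(maxMsgTimestamp)); // false
    }
}
```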



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4528) Failure in kafka.producer.ProducerTest.testAsyncSendCanCorrectlyFailWithTimeout

2016-12-14 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15749479#comment-15749479
 ] 

Guozhang Wang commented on KAFKA-4528:
--

Another observed failure: 
https://builds.apache.org/job/kafka-pr-jdk8-scala2.11/139/console

> Failure in 
> kafka.producer.ProducerTest.testAsyncSendCanCorrectlyFailWithTimeout
> ---
>
> Key: KAFKA-4528
> URL: https://issues.apache.org/jira/browse/KAFKA-4528
> Project: Kafka
>  Issue Type: Sub-task
>  Components: unit tests
>Reporter: Guozhang Wang
>  Labels: newbie
>
> I have seen this failure a few times in the past few days, worth 
> investigating.
> Example:
> https://builds.apache.org/job/kafka-pr-jdk8-scala2.12/79/testReport/junit/kafka.producer/ProducerTest/testAsyncSendCanCorrectlyFailWithTimeout/
> {code}
> Stacktrace
> java.lang.AssertionError: Message set should have 1 message
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> kafka.producer.ProducerTest.testAsyncSendCanCorrectlyFailWithTimeout(ProducerTest.scala:313)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:114)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:57)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:66)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
>   at sun.reflect.GeneratedMethodAccessor16.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
>   at 
> org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
>   at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
>   at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:109)
>   at sun.reflect.GeneratedMethodAccessor15.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.internal.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:377)
>   at 
> org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:54)
>   at 
> 

[jira] [Commented] (KAFKA-4540) Suspended tasks that are not assigned to the StreamThread need to be closed before new active and standby tasks are created

2016-12-14 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15749465#comment-15749465
 ] 

Guozhang Wang commented on KAFKA-4540:
--

Is this already resolved as part of 
https://issues.apache.org/jira/browse/KAFKA-4509? In the patch we reorder the 
operations to first close all the suspended tasks and then try to create new 
tasks.

> Suspended tasks that are not assigned to the StreamThread need to be closed 
> before new active and standby tasks are created
> ---
>
> Key: KAFKA-4540
> URL: https://issues.apache.org/jira/browse/KAFKA-4540
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.2.0
>Reporter: Damian Guy
>Assignee: Damian Guy
> Fix For: 0.10.2.0
>
>
> When partition assignment happens we first try to add the active tasks and 
> then add the standby tasks. The problem with this is that a new active task 
> might already be an existing suspended standby task. If this is the case, then 
> when the active task initializes it will throw an exception from RocksDB:
> {{Caused by: org.rocksdb.RocksDBException: IO error: lock 
> /tmp/kafka-streams-7071/kafka-music-charts/1_1/rocksdb/all-songs/LOCK: No 
> locks available}}
> We need to make sure we have removed and closed any no-longer-assigned 
> suspended tasks before creating new tasks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-4532) StateStores can be connected to the wrong source topic resulting in incorrect metadata returned from IQ

2016-12-13 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-4532.
--
Resolution: Fixed

Issue resolved by pull request 2250
[https://github.com/apache/kafka/pull/2250]

> StateStores can be connected to the wrong source topic resulting in incorrect 
> metadata returned from IQ
> ---
>
> Key: KAFKA-4532
> URL: https://issues.apache.org/jira/browse/KAFKA-4532
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.1.0, 0.10.1.1, 0.10.2.0
>Reporter: Damian Guy
>Assignee: Damian Guy
> Fix For: 0.10.2.0
>
>
> When building a topology with tables and StateStores, the StateStores are 
> mapped to the source topic names. This map is retrieved via 
> {{TopologyBuilder.stateStoreNameToSourceTopics()}} and is used in Interactive 
> Queries to find the source topics and partitions when resolving the 
> partitions that particular keys will be in.
> There is an issue whereby this mapping for a table that is originally 
> created with {{builder.table("topic", "table");}}, and is subsequently 
> used in a join, is changed to the join topic. This is because the mapping is 
> updated during the call to {{topology.connectProcessorAndStateStores(..)}}. 
> In the case that the {{stateStoreNameToSourceTopics}} Map already has a value 
> for the state store name, it should not update the Map.
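The last sentence amounts to put-if-absent semantics on the map. A minimal sketch (hypothetical method and map names, not the actual TopologyBuilder code):

```java
import java.util.*;

// Sketch: only record a store's source topics if no mapping exists yet,
// so a later join cannot overwrite a table's original source topic.
class StoreTopicMapping {
    static Map<String, List<String>> stateStoreNameToSourceTopics = new HashMap<>();

    static void connectProcessorAndStateStore(String store, List<String> topics) {
        // The bug was effectively an unconditional put(); putIfAbsent keeps
        // the first (correct) mapping.
        stateStoreNameToSourceTopics.putIfAbsent(store, topics);
    }

    public static void main(String[] args) {
        connectProcessorAndStateStore("table", Arrays.asList("topic"));      // builder.table("topic", "table")
        connectProcessorAndStateStore("table", Arrays.asList("join-topic")); // later join must not win
        System.out.println(stateStoreNameToSourceTopics.get("table"));       // [topic]
    }
}
```

With put-if-absent, interactive-query metadata lookups keep resolving the store to its original source topic "topic" rather than the join topic.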



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-4509) Task reusage on rebalance fails for threads on same host

2016-12-13 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-4509:
-
   Resolution: Fixed
Fix Version/s: 0.10.2.0
   Status: Resolved  (was: Patch Available)

Issue resolved by pull request 2233
[https://github.com/apache/kafka/pull/2233]

> Task reusage on rebalance fails for threads on same host
> 
>
> Key: KAFKA-4509
> URL: https://issues.apache.org/jira/browse/KAFKA-4509
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: Matthias J. Sax
> Fix For: 0.10.2.0
>
>
> In https://issues.apache.org/jira/browse/KAFKA-3559 task reusage on rebalance 
> was introduced as a performance optimization. Instead of closing a task on 
> rebalance (i.e., in {{onPartitionsRevoked()}}), it only gets suspended for a 
> potential reuse in {{onPartitionsAssigned()}}. Only if a task cannot be 
> reused will it eventually get closed in {{onPartitionsAssigned()}}.
> This mechanism can fail, if multiple {{StreamThreads}} run in the same host 
> (same or different JVM). The scenario is as follows:
>  - assume 2 running threads A and B
>  - assume 3 tasks t1, t2, t3
>  - assignment: A-(t1,t2) and B-(t3)
>  - on the same host, a new single threaded Stream application (same app-id) 
> gets started (thread C)
>  - on rebalance, t2 (could also be t1 -- does not matter) will be moved from 
> A to C
>  - as assignment is only sticky based on a heuristic, t1 can sometimes be 
> assigned to B, too -- and t3 gets assigned to A (there is a race condition 
> determining whether this "task flipping" happens or not)
>  - on revoke, A will suspend task t1 and t2 (not releasing any locks)
>  - on assign
> - A tries to create t3 but as B did not release it yet, A dies with an 
> "cannot get lock" exception
> - B tries to create t1 but as A did not release it yet, B dies with an 
> "cannot get lock" exception
> - as A and B try to create the task first, this will always fail if task 
> flipping happened
> - C tries to create t2 but A did not release the t2 lock yet (race condition) 
> and C dies with an exception (this could even happen without "task flipping" 
> between A and B)
> We want to fix this, by:
>   # first release unassigned suspended tasks in {{onPartitionsAssigned()}}, 
> and afterward create new tasks (this fixes the "task flipping" issue)
>   # use a "backoff and retry mechanism" if a task cannot be created (to 
> handle release-create race condition between different threads)
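Point 2 of the plan is a standard backoff-and-retry loop. A generic, hedged sketch (the Supplier-based API and all names are illustrative, not Kafka's):

```java
import java.util.function.Supplier;

// Retry task creation with linear backoff when another thread on the same
// host has not yet released the task's state-directory lock.
class RetryWithBackoff {
    static <T> T createWithRetry(Supplier<T> create, int maxAttempts, long backoffMs) {
        RuntimeException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return create.get();
            } catch (RuntimeException e) { // e.g. "cannot get lock"
                last = e;
                try {
                    Thread.sleep(backoffMs * attempt); // wait for the other thread to release
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    break;
                }
            }
        }
        throw last; // give up after maxAttempts
    }

    public static void main(String[] args) {
        int[] failuresLeft = {2}; // fail twice, then succeed (lock released meanwhile)
        String task = createWithRetry(() -> {
            if (failuresLeft[0]-- > 0) throw new RuntimeException("cannot get lock");
            return "task-1_1";
        }, 5, 10);
        System.out.println(task); // task-1_1
    }
}
```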



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-4516) When a CachingStateStore is closed it should clear its associated NamedCache. Subsequent queries should throw InvalidStateStoreException

2016-12-12 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-4516:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Issue resolved by pull request 2235
[https://github.com/apache/kafka/pull/2235]

> When a CachingStateStore is closed it should clear its associated NamedCache. 
> Subsequent queries should throw InvalidStateStoreException
> 
>
> Key: KAFKA-4516
> URL: https://issues.apache.org/jira/browse/KAFKA-4516
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.1.0
>Reporter: Damian Guy
>Assignee: Damian Guy
> Fix For: 0.10.2.0
>
>
> When close is called on a CachingStateStore we don't release the memory it is 
> using in the Cache. This could result in the cache being full of data that is 
> redundant. We also still allow queries on the CachingStateStore even though it 
> has been closed.
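A minimal sketch of the intended behavior (hypothetical class, not Kafka's implementation; Kafka's InvalidStateStoreException is modeled with IllegalStateException): close() evicts the store's entries from the shared cache, and subsequent queries fail fast.

```java
import java.util.*;

// On close(): clear this store's entries from the shared cache and reject
// any further queries instead of silently serving a closed store.
class CachingStoreSketch {
    static Map<String, String> sharedCache = new HashMap<>(); // name-spaced shared cache
    final String name;
    volatile boolean open = true;

    CachingStoreSketch(String name) { this.name = name; }

    void put(String key, String value) { sharedCache.put(name + "/" + key, value); }

    String get(String key) {
        if (!open) // models InvalidStateStoreException
            throw new IllegalStateException("Store " + name + " is closed");
        return sharedCache.get(name + "/" + key);
    }

    void close() {
        open = false;
        sharedCache.keySet().removeIf(k -> k.startsWith(name + "/")); // free the memory
    }

    public static void main(String[] args) {
        CachingStoreSketch store = new CachingStoreSketch("all-songs");
        store.put("k", "v");
        store.close();
        System.out.println(sharedCache.isEmpty()); // true: entries evicted
        try {
            store.get("k");
        } catch (IllegalStateException e) {
            System.out.println("query rejected");
        }
    }
}
```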



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-4528) Failure in kafka.producer.ProducerTest.testAsyncSendCanCorrectlyFailWithTimeout

2016-12-12 Thread Guozhang Wang (JIRA)
Guozhang Wang created KAFKA-4528:


 Summary: Failure in 
kafka.producer.ProducerTest.testAsyncSendCanCorrectlyFailWithTimeout
 Key: KAFKA-4528
 URL: https://issues.apache.org/jira/browse/KAFKA-4528
 Project: Kafka
  Issue Type: Sub-task
  Components: unit tests
Reporter: Guozhang Wang


I have seen this failure a few times in the past few days, worth investigating.

Example:

https://builds.apache.org/job/kafka-pr-jdk8-scala2.12/79/testReport/junit/kafka.producer/ProducerTest/testAsyncSendCanCorrectlyFailWithTimeout/

{code}
Stacktrace

java.lang.AssertionError: Message set should have 1 message
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at 
kafka.producer.ProducerTest.testAsyncSendCanCorrectlyFailWithTimeout(ProducerTest.scala:313)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:114)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:57)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:66)
at 
org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
at sun.reflect.GeneratedMethodAccessor16.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at 
org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
at 
org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
at 
org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:109)
at sun.reflect.GeneratedMethodAccessor15.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at 
org.gradle.internal.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:377)
at 
org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:54)
at 
org.gradle.internal.concurrent.StoppableExecutorImpl$1.run(StoppableExecutorImpl.java:40)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Standard Output

[2016-12-12 18:51:44,634] ERROR ZKShutdownHandler is not registered, so 
ZooKeeper server won't take any action on ERROR or SHUTDOWN server state 
changes 

[jira] [Updated] (KAFKA-4510) StreamThread must finish rebalance in state PENDING_SHUTDOWN

2016-12-10 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4510?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-4510:
-
   Resolution: Fixed
Fix Version/s: 0.10.2.0
   Status: Resolved  (was: Patch Available)

Issue resolved by pull request 2227
[https://github.com/apache/kafka/pull/2227]

> StreamThread must finish rebalance in state PENDING_SHUTDOWN
> 
>
> Key: KAFKA-4510
> URL: https://issues.apache.org/jira/browse/KAFKA-4510
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: Matthias J. Sax
> Fix For: 0.10.2.0
>
>
> If a Streams application runs with multiple threads within one JVM and the 
> application gets stopped, this can trigger a rebalance when the first 
> thread finishes processing, because not all threads shut down at the same time.
> Because we try to reuse tasks, on rebalance tasks are not closed immediately 
> in order to allow a potential reuse (if a task gets assigned to its original 
> thread). However, if a thread is in state {{PENDING_SHUTDOWN}} it does not finish 
> the rebalance operation completely and thus does not release the suspended 
> tasks, and the application exits without all locks released.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-4476) Kafka Streams gets stuck if metadata is missing

2016-12-10 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-4476:
-
   Resolution: Fixed
Fix Version/s: 0.10.2.0
   Status: Resolved  (was: Patch Available)

Issue resolved by pull request 2209
[https://github.com/apache/kafka/pull/2209]

> Kafka Streams gets stuck if metadata is missing
> ---
>
> Key: KAFKA-4476
> URL: https://issues.apache.org/jira/browse/KAFKA-4476
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: Matthias J. Sax
>Priority: Critical
> Fix For: 0.10.2.0
>
>
> When a Kafka Streams application gets started for the first time, it can 
> happen that some topic metadata is missing when 
> {{StreamPartitionAssigner#assign()}} is called on the group leader instance. 
> This can result in an infinite loop within 
> {{StreamPartitionAssigner#assign()}}. This issue was detected by 
> {{ResetIntegrationTest}}, which has a transient timeout failure (cf. 
> https://issues.apache.org/jira/browse/KAFKA-4058 -- this issue was re-opened 
> multiple times as the problem was expected to be in the test -- however, that 
> is not the case).
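One general way to avoid such an infinite loop (shown here as a generic sketch under assumed names, not the actual StreamPartitionAssigner fix) is to bound the wait for metadata and give up after a deadline:

```java
import java.util.Optional;
import java.util.function.Supplier;

// Bounded wait for topic metadata: retry with a pause until a deadline,
// then return empty instead of spinning forever inside assign().
class BoundedMetadataWait {
    static Optional<Integer> waitForPartitionCount(Supplier<Integer> fetch, long timeoutMs) {
        long deadline = System.currentTimeMillis() + timeoutMs;
        do {
            Integer partitions = fetch.get(); // null models "metadata missing"
            if (partitions != null) return Optional.of(partitions);
            try {
                Thread.sleep(5); // brief pause before re-fetching metadata
            } catch (InterruptedException ie) {
                Thread.currentThread().interrupt();
                return Optional.empty();
            }
        } while (System.currentTimeMillis() < deadline);
        return Optional.empty(); // caller can fail the assignment or fall back
    }

    public static void main(String[] args) {
        System.out.println(waitForPartitionCount(() -> 4, 100).isPresent());   // true
        System.out.println(waitForPartitionCount(() -> null, 30).isPresent()); // false
    }
}
```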



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

