[jira] [Commented] (KAFKA-2839) Kafka connect log test failing

2015-12-09 Thread jin xing (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15048431#comment-15048431
 ] 

jin xing commented on KAFKA-2839:
-

[~ewencp]
No problem, I hope to have a chance to contribute to Kafka :)

> Kafka connect log test failing
> --
>
> Key: KAFKA-2839
> URL: https://issues.apache.org/jira/browse/KAFKA-2839
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Gwen Shapira
>Assignee: jin xing
> Fix For: 0.9.1.0
>
>
> org.apache.kafka.connect.util.KafkaBasedLogTest > testSendAndReadToEnd FAILED
> org.junit.ComparisonFailure: expected: but was:
> at org.junit.Assert.assertEquals(Assert.java:115)
> at org.junit.Assert.assertEquals(Assert.java:144)
> at 
> org.apache.kafka.connect.util.KafkaBasedLogTest.testSendAndReadToEnd(KafkaBasedLogTest.java:312)





Build failed in Jenkins: kafka-trunk-jdk8 #215

2015-12-09 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-2962: stream-table table-table joins

--
[...truncated 1425 lines...]

kafka.api.PlaintextConsumerTest > testPartitionsForAutoCreate PASSED

kafka.api.PlaintextConsumerTest > testShrinkingTopicSubscriptions PASSED

kafka.api.PlaintextConsumerTest > testMultiConsumerSessionTimeoutOnStopPolling 
PASSED

kafka.api.PlaintextConsumerTest > testPartitionsForInvalidTopic PASSED

kafka.api.PlaintextConsumerTest > testSeek PASSED

kafka.api.PlaintextConsumerTest > testMultiConsumerSessionTimeoutOnClose PASSED

kafka.api.PlaintextConsumerTest > testFetchRecordTooLarge PASSED

kafka.api.PlaintextConsumerTest > testMultiConsumerDefaultAssignment PASSED

kafka.api.PlaintextConsumerTest > testAutoCommitOnClose PASSED

kafka.api.PlaintextConsumerTest > testExpandingTopicSubscriptions PASSED

kafka.api.PlaintextConsumerTest > testPatternUnsubscription PASSED

kafka.api.PlaintextConsumerTest > testGroupConsumption PASSED

kafka.api.PlaintextConsumerTest > testPartitionsFor PASSED

kafka.api.PlaintextConsumerTest > testMultiConsumerRoundRobinAssignment PASSED

kafka.api.PlaintextConsumerTest > testPartitionPauseAndResume PASSED

kafka.api.PlaintextConsumerTest > testAutoCommitOnCloseAfterWakeup PASSED

kafka.api.PlaintextConsumerTest > testAutoOffsetReset PASSED

kafka.api.PlaintextConsumerTest > testFetchInvalidOffset PASSED

kafka.api.PlaintextConsumerTest > testCommitMetadata PASSED

kafka.api.PlaintextConsumerTest > testRoundRobinAssignment PASSED

kafka.api.PlaintextConsumerTest > testPatternSubscription PASSED

kafka.api.PlaintextConsumerTest > testPauseStateNotPreservedByRebalance PASSED

kafka.api.PlaintextConsumerTest > testUnsubscribeTopic PASSED

kafka.api.PlaintextConsumerTest > testListTopics PASSED

kafka.api.PlaintextConsumerTest > testAutoCommitOnRebalance PASSED

kafka.api.PlaintextConsumerTest > testSimpleConsumption PASSED

kafka.api.PlaintextConsumerTest > testPartitionReassignmentCallback PASSED

kafka.api.PlaintextConsumerTest > testCommitSpecifiedOffsets PASSED

kafka.api.ProducerBounceTest > testBrokerFailure PASSED

kafka.api.ProducerFailureHandlingTest > testCannotSendToInternalTopic PASSED

kafka.api.ProducerFailureHandlingTest > testTooLargeRecordWithAckOne PASSED

kafka.api.ProducerFailureHandlingTest > testWrongBrokerList PASSED

kafka.api.ProducerFailureHandlingTest > testNotEnoughReplicas PASSED

kafka.api.ProducerFailureHandlingTest > testNonExistentTopic PASSED

kafka.api.ProducerFailureHandlingTest > testInvalidPartition PASSED

kafka.api.ProducerFailureHandlingTest > testSendAfterClosed PASSED

kafka.api.ProducerFailureHandlingTest > testTooLargeRecordWithAckZero PASSED

kafka.api.ProducerFailureHandlingTest > 
testNotEnoughReplicasAfterBrokerShutdown PASSED

kafka.api.SaslPlaintextConsumerTest > testPauseStateNotPreservedByRebalance 
PASSED

kafka.api.SaslPlaintextConsumerTest > testUnsubscribeTopic PASSED

kafka.api.SaslPlaintextConsumerTest > testListTopics PASSED

kafka.api.SaslPlaintextConsumerTest > testAutoCommitOnRebalance PASSED

kafka.api.SaslPlaintextConsumerTest > testSimpleConsumption PASSED

kafka.api.SaslPlaintextConsumerTest > testPartitionReassignmentCallback PASSED

kafka.api.SaslPlaintextConsumerTest > testCommitSpecifiedOffsets PASSED

kafka.api.SslConsumerTest > testPauseStateNotPreservedByRebalance PASSED

kafka.api.SslConsumerTest > testUnsubscribeTopic PASSED

kafka.api.SslConsumerTest > testListTopics PASSED

kafka.api.SslConsumerTest > testAutoCommitOnRebalance PASSED

kafka.api.SslConsumerTest > testSimpleConsumption PASSED

kafka.api.SslConsumerTest > testPartitionReassignmentCallback PASSED

kafka.api.SslConsumerTest > testCommitSpecifiedOffsets PASSED

kafka.api.ConsumerBounceTest > testSeekAndCommitWithBrokerFailures PASSED

kafka.api.ConsumerBounceTest > testConsumptionWithBrokerFailures PASSED

kafka.api.AuthorizerIntegrationTest > testCommitWithTopicWrite PASSED

kafka.api.AuthorizerIntegrationTest > testConsumeWithNoTopicAccess PASSED

kafka.api.AuthorizerIntegrationTest > 
testCreatePermissionNeededToReadFromNonExistentTopic PASSED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchWithNoAccess PASSED

kafka.api.AuthorizerIntegrationTest > testListOfsetsWithTopicDescribe PASSED

kafka.api.AuthorizerIntegrationTest > testProduceWithTopicRead PASSED

kafka.api.AuthorizerIntegrationTest > testListOffsetsWithNoTopicAccess PASSED

kafka.api.AuthorizerIntegrationTest > 
testCreatePermissionNeededForWritingToNonExistentTopic PASSED

kafka.api.AuthorizerIntegrationTest > testConsumeWithNoGroupAccess PASSED

kafka.api.AuthorizerIntegrationTest > testConsumeWithTopicDescribe PASSED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchWithNoTopicAccess PASSED

kafka.api.AuthorizerIntegrationTest > testCommitWithNoTopicAccess PASSED

kafka.api.AuthorizerIntegrationTest > testProduceWithNoTopicAccess PASSED


[GitHub] kafka pull request: Trunk kafka 2839

2015-12-09 Thread ZoneMayor
Github user ZoneMayor closed the pull request at:

https://github.com/apache/kafka/pull/602




[jira] [Comment Edited] (KAFKA-2875) Class path contains multiple SLF4J bindings warnings when using scripts under bin

2015-12-09 Thread jin xing (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15047978#comment-15047978
 ] 

jin xing edited comment on KAFKA-2875 at 12/9/15 10:32 AM:
---

[~ijuma]
Hi Ismael, is there any update or feedback on this patch?


was (Author: jinxing6...@126.com):
[~ijuma]
hi Ismael, is there any update or feed back about this review?

> Class path contains multiple SLF4J bindings warnings when using scripts under 
> bin
> -
>
> Key: KAFKA-2875
> URL: https://issues.apache.org/jira/browse/KAFKA-2875
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.9.0.0
>Reporter: Ismael Juma
>Assignee: jin xing
>Priority: Minor
>
> This adds a lot of noise when running the scripts; see this example from 
> running kafka-console-producer.sh:
> {code}
> ~/D/s/kafka-0.9.0.0-src ❯❯❯ ./bin/kafka-console-producer.sh --topic topic 
> --broker-list localhost:9092 ⏎
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in 
> [jar:file:/Users/ijuma/Downloads/scala-releases/kafka-0.9.0.0-src/core/build/dependant-libs-2.10.5/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/Users/ijuma/Downloads/scala-releases/kafka-0.9.0.0-src/core/build/dependant-libs-2.10.5/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/Users/ijuma/Downloads/scala-releases/kafka-0.9.0.0-src/tools/build/dependant-libs-2.10.5/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/Users/ijuma/Downloads/scala-releases/kafka-0.9.0.0-src/connect/api/build/dependant-libs/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/Users/ijuma/Downloads/scala-releases/kafka-0.9.0.0-src/connect/runtime/build/dependant-libs/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/Users/ijuma/Downloads/scala-releases/kafka-0.9.0.0-src/connect/file/build/dependant-libs/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/Users/ijuma/Downloads/scala-releases/kafka-0.9.0.0-src/connect/json/build/dependant-libs/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation.
> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> {code}





[jira] [Commented] (KAFKA-2875) Class path contains multiple SLF4J bindings warnings when using scripts under bin

2015-12-09 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15048454#comment-15048454
 ] 

Ismael Juma commented on KAFKA-2875:


Sorry for the delay. I'll take a look today.

> Class path contains multiple SLF4J bindings warnings when using scripts under 
> bin
> -
>
> Key: KAFKA-2875
> URL: https://issues.apache.org/jira/browse/KAFKA-2875
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.9.0.0
>Reporter: Ismael Juma
>Assignee: jin xing
>Priority: Minor
>
> This adds a lot of noise when running the scripts; see this example from 
> running kafka-console-producer.sh:
> {code}
> ~/D/s/kafka-0.9.0.0-src ❯❯❯ ./bin/kafka-console-producer.sh --topic topic 
> --broker-list localhost:9092 ⏎
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in 
> [jar:file:/Users/ijuma/Downloads/scala-releases/kafka-0.9.0.0-src/core/build/dependant-libs-2.10.5/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/Users/ijuma/Downloads/scala-releases/kafka-0.9.0.0-src/core/build/dependant-libs-2.10.5/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/Users/ijuma/Downloads/scala-releases/kafka-0.9.0.0-src/tools/build/dependant-libs-2.10.5/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/Users/ijuma/Downloads/scala-releases/kafka-0.9.0.0-src/connect/api/build/dependant-libs/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/Users/ijuma/Downloads/scala-releases/kafka-0.9.0.0-src/connect/runtime/build/dependant-libs/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/Users/ijuma/Downloads/scala-releases/kafka-0.9.0.0-src/connect/file/build/dependant-libs/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/Users/ijuma/Downloads/scala-releases/kafka-0.9.0.0-src/connect/json/build/dependant-libs/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation.
> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> {code}





Build failed in Jenkins: kafka-trunk-jdk8 #216

2015-12-09 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu-2 (docker Ubuntu ubuntu) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision 991aad23baa2f55d405d374b0a01785acdc63974 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 991aad23baa2f55d405d374b0a01785acdc63974
 > git rev-list 991aad23baa2f55d405d374b0a01785acdc63974 # timeout=10
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson1396873671346001689.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.4-rc-2/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:downloadWrapper

BUILD SUCCESSFUL

Total time: 16.613 secs
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson684939694480473527.sh
+ export GRADLE_OPTS=-Xmx1024m
+ GRADLE_OPTS=-Xmx1024m
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 clean jarAll testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.9/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:clean UP-TO-DATE
:clients:clean UP-TO-DATE
:connect:clean UP-TO-DATE
:core:clean UP-TO-DATE
:examples:clean UP-TO-DATE
:log4j-appender:clean UP-TO-DATE
:streams:clean UP-TO-DATE
:tools:clean UP-TO-DATE
:connect:api:clean UP-TO-DATE
:connect:file:clean UP-TO-DATE
:connect:json:clean UP-TO-DATE
:connect:runtime:clean UP-TO-DATE
:jar_core_2_10
Building project 'core' with Scala version 2.10.6
:kafka-trunk-jdk8:clients:compileJava
:jar_core_2_10 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Failed to capture snapshot of input files for task 'compileJava' during 
up-to-date check.  See stacktrace for details.
> Could not add entry 
> '/home/jenkins/.gradle/caches/modules-2/files-2.1/net.jpountz.lz4/lz4/1.3/792d5e592f6f3f0c1a3337cd0ac84309b544f8f4/lz4-1.3.jar'
>  to cache fileHashes.bin 
> (

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED

Total time: 14.259 secs
Build step 'Execute shell' marked build as failure
Recording test results
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
ERROR: Publisher 'Publish JUnit test result report' failed: No test report 
files were found. Configuration error?
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2


[jira] [Commented] (KAFKA-2970) Both UpdateMetadataRequest.java and LeaderAndIsrRequest.java have an Endpoint class

2015-12-09 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15048498#comment-15048498
 ] 

Ismael Juma commented on KAFKA-2970:


This was intentional. Each Request and Response type defines its own types so 
that they can evolve separately. There are pros and cons to each approach, but 
I just wanted to raise this before we change it.

> Both UpdateMetadataRequest.java and LeaderAndIsrRequest.java have an Endpoint 
> class
> ---
>
> Key: KAFKA-2970
> URL: https://issues.apache.org/jira/browse/KAFKA-2970
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Affects Versions: 0.9.0.0
>Reporter: Grant Henke
>
> Both UpdateMetadataRequest.java and LeaderAndIsrRequest.java have an Endpoint 
> class which contain the same information. These should be consolidated for 
> simplicity and inter-opt. 





[jira] [Updated] (KAFKA-2971) KAFKA - Not obeying log4j settings, DailyRollingFileAppender not rolling files

2015-12-09 Thread Damir Ban (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Damir Ban updated KAFKA-2971:
-
Attachment: log4j.properties

> KAFKA - Not obeying log4j settings, DailyRollingFileAppender not rolling files
> --
>
> Key: KAFKA-2971
> URL: https://issues.apache.org/jira/browse/KAFKA-2971
> Project: Kafka
>  Issue Type: Bug
>  Components: config, log
>Affects Versions: 0.8.2.2
> Environment: OS: Windows Server 2008 R2 Enterprise SP1
> log4j: 1.2.16
>Reporter: Damir Ban
>Assignee: Jay Kreps
> Fix For: 0.8.2.2
>
> Attachments: log4j.properties
>
>
> Per the log4j settings, the log files are expected to roll over periodically, 
> but they just keep growing until the service is restarted, at which point they 
> are overwritten.
> Because we have intermittent fatal failures, the customer restarts the service 
> and we lose the information about the failure.
> We have tried different date patterns in the DailyRollingFileAppender, but 
> nothing changed.
> Attaching the log4j.properties.





[jira] [Created] (KAFKA-2971) KAFKA - Not obeying log4j settings, DailyRollingFileAppender not rolling files

2015-12-09 Thread Damir Ban (JIRA)
Damir Ban created KAFKA-2971:


 Summary: KAFKA - Not obeying log4j settings, 
DailyRollingFileAppender not rolling files
 Key: KAFKA-2971
 URL: https://issues.apache.org/jira/browse/KAFKA-2971
 Project: Kafka
  Issue Type: Bug
  Components: config, log
Affects Versions: 0.8.2.2
 Environment: OS: Windows Server 2008 R2 Enterprise SP1
log4j: 1.2.16
Reporter: Damir Ban
Assignee: Jay Kreps
 Fix For: 0.8.2.2


Per the log4j settings, the log files are expected to roll over periodically, 
but they just keep growing until the service is restarted, at which point they 
are overwritten.

Because we have intermittent fatal failures, the customer restarts the service 
and we lose the information about the failure.

We have tried different date patterns in the DailyRollingFileAppender, but 
nothing changed.

Attaching the log4j.properties.
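The attached file isn't inlined in this archive; for reference, a typical 
DailyRollingFileAppender configuration of the kind described looks roughly like 
this (the appender name, path, and patterns are assumptions, not the customer's 
actual settings):

{code}
# Hypothetical log4j.properties sketch, not the attached file.
log4j.rootLogger=INFO, kafkaAppender

log4j.appender.kafkaAppender=org.apache.log4j.DailyRollingFileAppender
# Roll once per day; the old file is renamed with this date suffix.
log4j.appender.kafkaAppender.DatePattern='.'yyyy-MM-dd
log4j.appender.kafkaAppender.File=C:/kafka/logs/server.log
log4j.appender.kafkaAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.kafkaAppender.layout.ConversionPattern=[%d] %p %m (%c)%n
{code}

One possible factor worth checking on Windows: DailyRollingFileAppender rolls 
by renaming the current file at the date boundary, and that rename can fail if 
another process holds the file open, in which case log4j keeps writing to the 
original file.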





[jira] [Updated] (KAFKA-2921) Plug-able implementations support

2015-12-09 Thread Andrii Biletskyi (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrii Biletskyi updated KAFKA-2921:

Attachment: KIP-30-LE-WIP.patch

> Plug-able implementations support
> -
>
> Key: KAFKA-2921
> URL: https://issues.apache.org/jira/browse/KAFKA-2921
> Project: Kafka
>  Issue Type: Sub-task
>  Components: core
>Reporter: Andrii Biletskyi
> Attachments: KIP-30-LE-WIP.patch
>
>
> Add infrastructure to support plug-able implementations in runtime.





[jira] [Updated] (KAFKA-2837) FAILING TEST: kafka.api.ProducerBounceTest > testBrokerFailure

2015-12-09 Thread jin xing (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jin xing updated KAFKA-2837:

 Assignee: jin xing
 Reviewer: Guozhang Wang
Fix Version/s: 0.9.1.0
   0.9.0.1
Affects Version/s: 0.9.0.0
   Status: Patch Available  (was: Open)

> FAILING TEST: kafka.api.ProducerBounceTest > testBrokerFailure 
> ---
>
> Key: KAFKA-2837
> URL: https://issues.apache.org/jira/browse/KAFKA-2837
> Project: Kafka
>  Issue Type: Sub-task
>Affects Versions: 0.9.0.0
>Reporter: Gwen Shapira
>Assignee: jin xing
>  Labels: newbie
> Fix For: 0.9.0.1, 0.9.1.0
>
>
> {code}
> java.lang.AssertionError
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertTrue(Assert.java:52)
>   at 
> kafka.api.ProducerBounceTest.testBrokerFailure(ProducerBounceTest.scala:117)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:105)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:56)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:64)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:50)
>   at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.messaging.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
>   at 
> org.gradle.messaging.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
>   at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
>   at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:106)
>   at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.messaging.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:360)
>   at 
> org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:54)
>   at 
> org.gradle.internal.concurrent.StoppableExecutorImpl$1.run(StoppableExecutorImpl.java:40)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> 

[jira] [Commented] (KAFKA-2837) FAILING TEST: kafka.api.ProducerBounceTest > testBrokerFailure

2015-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15048704#comment-15048704
 ] 

ASF GitHub Bot commented on KAFKA-2837:
---

GitHub user ZoneMayor opened a pull request:

https://github.com/apache/kafka/pull/648

KAFKA-2837: fix transient failure of kafka.api.ProducerBounceTest > 
testBrokerFailure

I can reproduce this transient failure; it happens rarely.
The test code in question looks like this:

    // rolling bounce brokers
    for (i <- 0 until numServers) {
      for (server <- servers) {
        server.shutdown()
        server.awaitShutdown()
        server.startup()
        Thread.sleep(2000)
      }

      // Make sure the producer does not see any exception
      // in returned metadata due to broker failures
      assertTrue(scheduler.failed == false)

      // Make sure the leader still exists after bouncing brokers
      (0 until numPartitions).foreach(partition =>
        TestUtils.waitUntilLeaderIsElectedOrChanged(zkUtils, topic1, partition))
    }

The brokers keep rolling-restarting while the producer keeps sending messages.
Every loop iteration waits for the election of a partition leader, but if the 
election is slow, more messages get buffered in the RecordAccumulator's 
BufferPool. The limit for the buffer is set to 3; when the pool runs out of 
memory, a TimeoutException("Failed to allocate memory within the configured max 
blocking time") shows up. Since the test sleeps for 2000 ms after every broker 
restart, this transient failure seldom happens, but if I reduce the sleeping 
period, the failure becomes more likely; for example, if the broker acting as 
controller is restarted, it takes time to elect a controller first and then a 
leader, which leaves more messages blocked in the KafkaProducer's 
RecordAccumulator BufferPool.
In this fix, I simply enlarge the producer's buffer size to 1MB.
@guozhangwang, could you give some comments?
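
As a minimal Java sketch of the kind of change described above (raising the 
producer's buffer.memory to 1MB), assuming the standard producer client API; 
the broker address and serializers are placeholders, and the test itself sets 
this in Scala:

{code}
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;

public class LargerBufferSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // Enlarge the RecordAccumulator's BufferPool so that slow leader
        // elections during the rolling bounce do not exhaust it and trigger
        // TimeoutException("Failed to allocate memory within the configured
        // max blocking time").
        props.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 1024 * 1024L); // 1MB
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.ByteArraySerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.ByteArraySerializer");
        KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(props);
        producer.close();
    }
}
{code}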

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ZoneMayor/kafka trunk-KAFKA-2837

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/648.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #648


commit 95374147a28208d4850f6e73f714bf418935fc2d
Author: ZoneMayor 
Date:   2015-11-27T03:49:34Z

Merge pull request #1 from apache/trunk

merge

commit cec5b48b651a7efd3900cfa3c1fd0ab1eeeaa3ec
Author: ZoneMayor 
Date:   2015-12-01T10:44:02Z

Merge pull request #2 from apache/trunk

2015-12-1

commit a119d547bf1741625ce0627073c7909992a20f15
Author: ZoneMayor 
Date:   2015-12-04T13:42:27Z

Merge pull request #3 from apache/trunk

2015-12-04#KAFKA-2893

commit b767a8dff85fc71c75d4cf5178c3f6f03ff81bfc
Author: ZoneMayor 
Date:   2015-12-09T10:42:30Z

Merge pull request #5 from apache/trunk

2015-12-9

commit cd5e6f4700a4387f9383b84aca0ee9c4639b1033
Author: jinxing 
Date:   2015-12-09T13:49:07Z

KAFKA-2837: fix transient failure kafka.api.ProducerBounceTest > 
testBrokerFailure




> FAILING TEST: kafka.api.ProducerBounceTest > testBrokerFailure 
> ---
>
> Key: KAFKA-2837
> URL: https://issues.apache.org/jira/browse/KAFKA-2837
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Gwen Shapira
>  Labels: newbie
>
> {code}
> java.lang.AssertionError
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertTrue(Assert.java:52)
>   at 
> kafka.api.ProducerBounceTest.testBrokerFailure(ProducerBounceTest.scala:117)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at 

[GitHub] kafka pull request: KAFKA-2837: fix transient failure of kafka.api...

2015-12-09 Thread ZoneMayor
GitHub user ZoneMayor opened a pull request:

https://github.com/apache/kafka/pull/648

KAFKA-2837: fix transient failure of kafka.api.ProducerBounceTest > 
testBrokerFailure

I can reproduce this transient failure; it happens rarely.
The test code in question looks like this:

    // rolling bounce brokers
    for (i <- 0 until numServers) {
      for (server <- servers) {
        server.shutdown()
        server.awaitShutdown()
        server.startup()
        Thread.sleep(2000)
      }

      // Make sure the producer does not see any exception
      // in returned metadata due to broker failures
      assertTrue(scheduler.failed == false)

      // Make sure the leader still exists after bouncing brokers
      (0 until numPartitions).foreach(partition =>
        TestUtils.waitUntilLeaderIsElectedOrChanged(zkUtils, topic1, partition))
    }

The brokers keep rolling-restarting while the producer keeps sending messages.
Every loop iteration waits for the election of a partition leader, but if the 
election is slow, more messages get buffered in the RecordAccumulator's 
BufferPool. The limit for the buffer is set to 3; when the pool runs out of 
memory, a TimeoutException("Failed to allocate memory within the configured max 
blocking time") shows up. Since the test sleeps for 2000 ms after every broker 
restart, this transient failure seldom happens, but if I reduce the sleeping 
period, the failure becomes more likely; for example, if the broker acting as 
controller is restarted, it takes time to elect a controller first and then a 
leader, which leaves more messages blocked in the KafkaProducer's 
RecordAccumulator BufferPool.
In this fix, I simply enlarge the producer's buffer size to 1MB.
@guozhangwang, could you give some comments?

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ZoneMayor/kafka trunk-KAFKA-2837

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/648.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #648


commit 95374147a28208d4850f6e73f714bf418935fc2d
Author: ZoneMayor 
Date:   2015-11-27T03:49:34Z

Merge pull request #1 from apache/trunk

merge

commit cec5b48b651a7efd3900cfa3c1fd0ab1eeeaa3ec
Author: ZoneMayor 
Date:   2015-12-01T10:44:02Z

Merge pull request #2 from apache/trunk

2015-12-1

commit a119d547bf1741625ce0627073c7909992a20f15
Author: ZoneMayor 
Date:   2015-12-04T13:42:27Z

Merge pull request #3 from apache/trunk

2015-12-04#KAFKA-2893

commit b767a8dff85fc71c75d4cf5178c3f6f03ff81bfc
Author: ZoneMayor 
Date:   2015-12-09T10:42:30Z

Merge pull request #5 from apache/trunk

2015-12-9

commit cd5e6f4700a4387f9383b84aca0ee9c4639b1033
Author: jinxing 
Date:   2015-12-09T13:49:07Z

KAFKA-2837: fix transient failure kafka.api.ProducerBounceTest > 
testBrokerFailure






[jira] [Commented] (KAFKA-2837) FAILING TEST: kafka.api.ProducerBounceTest > testBrokerFailure

2015-12-09 Thread jin xing (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15048709#comment-15048709
 ] 

jin xing commented on KAFKA-2837:
-

[~guozhang]
Hi Guozhang, could you give some comments?

> FAILING TEST: kafka.api.ProducerBounceTest > testBrokerFailure 
> ---
>
> Key: KAFKA-2837
> URL: https://issues.apache.org/jira/browse/KAFKA-2837
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Gwen Shapira
>  Labels: newbie
>
> {code}
> java.lang.AssertionError
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertTrue(Assert.java:52)
>   at 
> kafka.api.ProducerBounceTest.testBrokerFailure(ProducerBounceTest.scala:117)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:105)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:56)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:64)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:50)
>   at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.messaging.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
>   at 
> org.gradle.messaging.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
>   at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
>   at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:106)
>   at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.messaging.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:360)
>   at 
> org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:54)
>   at 
> org.gradle.internal.concurrent.StoppableExecutorImpl$1.run(StoppableExecutorImpl.java:40)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:744)
> {code}
> https://builds.apache.org/job/kafka-trunk-jdk7/815/console




[jira] [Commented] (KAFKA-2903) FileMessageSet's read method may have a problem when start is not zero

2015-12-09 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15048946#comment-15048946
 ] 

Jun Rao commented on KAFKA-2903:


Yes, I think that will be fine too.

> FileMessageSet's read method may have a problem when start is not zero
> -
>
> Key: KAFKA-2903
> URL: https://issues.apache.org/jira/browse/KAFKA-2903
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 0.8.2.1, 0.9.0.0
>Reporter: Pengwei
>Assignee: Jay Kreps
> Fix For: 0.9.1.0
>
>
> The code is currently:
> def read(position: Int, size: Int): FileMessageSet = {
>   ...
>   new FileMessageSet(file,
>     channel,
>     start = this.start + position,
>     end = math.min(this.start + position + size, sizeInBytes()))
> }
> If this.start is not 0, the end is capped at the FileMessageSet's size rather 
> than at the actual end position of the underlying slice.
> The end parameter should be:
>   end = math.min(this.start + position + size, this.start + sizeInBytes())
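
To make the off-by-`start` concrete, a hedged numeric sketch (the values are 
hypothetical; the variable names mirror the Scala snippet above):

{code}
// Hypothetical illustration of the KAFKA-2903 end-offset bug. A
// FileMessageSet is a slice [start, end) of a file, and read(position, size)
// should return a sub-slice relative to `start`.
public class EndOffsetSketch {
    public static void main(String[] args) {
        int start = 100;        // the slice begins at byte 100 of the file
        int sizeInBytes = 200;  // the slice is 200 bytes long, ending at 300
        int position = 0;
        int size = 500;         // the caller asks for more bytes than exist

        // Current code: caps at the slice *size*, not its absolute end.
        int buggyEnd = Math.min(start + position + size, sizeInBytes);
        // Proposed fix: caps at the absolute end of the slice.
        int fixedEnd = Math.min(start + position + size, start + sizeInBytes);

        System.out.println("buggy end = " + buggyEnd); // 200: cuts the slice short
        System.out.println("fixed end = " + fixedEnd); // 300: the real slice end
    }
}
{code}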





[jira] [Commented] (KAFKA-2972) ControlledShutdownResponse always deserialises `partitionsRemaining` as empty

2015-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15048976#comment-15048976
 ] 

ASF GitHub Bot commented on KAFKA-2972:
---

GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/649

KAFKA-2972; Add missing `partitionsRemaingList.add` in 
`ControlledShutdownResponse` constructor



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka 
KAFKA-2972-controlled-shutdown-response-bug

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/649.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #649


commit 82eb116122637e05221a8afbceae12d97cc1463d
Author: Ismael Juma 
Date:   2015-12-09T16:57:56Z

Add missing `partitionsRemaingList.add` in `ControlledShutdownResponse` 
constructor




> ControlledShutdownResponse always deserialises `partitionsRemaining` as empty
> -
>
> Key: KAFKA-2972
> URL: https://issues.apache.org/jira/browse/KAFKA-2972
> Project: Kafka
>  Issue Type: Bug
>  Components: network
>Affects Versions: 0.9.0.0
>Reporter: Ismael Juma
>Assignee: Ismael Juma
> Fix For: 0.9.0.1
>
>
> This was a regression introduced when moving to Java request/response classes.
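
The PR title points at a single missing add call in the constructor that 
deserialises the struct. As a hedged reconstruction of the bug pattern (the 
names and types below are illustrative, not the actual class):

{code}
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Hypothetical sketch of the KAFKA-2972 bug pattern: the constructor loops
// over the wire entries and parses each one, but without the add call the
// collection stays empty, so callers always see an empty partitionsRemaining.
class ControlledShutdownResponseSketch {
    final Set<String> partitionsRemaining = new HashSet<>();

    ControlledShutdownResponseSketch(List<String> wireEntries) {
        for (String entry : wireEntries) {
            String parsed = entry.trim(); // stand-in for struct deserialisation
            partitionsRemaining.add(parsed); // the line the fix adds back
        }
    }
}
{code}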





[GitHub] kafka pull request: KAFKA-2973; Fix issue where `childrenSensors` ...

2015-12-09 Thread ijuma
GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/650

KAFKA-2973; Fix issue where `childrenSensors` is incorrectly updated



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka 
kafka-2973-fix-leak-child-sensors-on-remove

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/650.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #650


commit ef6a543edd4c14e44b8dd660b936a7efa8aeaee0
Author: Ismael Juma 
Date:   2015-12-09T16:39:49Z

Fix issue where `childrenSensors` was incorrectly updated






[GitHub] kafka pull request: [KAFKA-2965]Two variables should be exchanged.

2015-12-09 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/646




[jira] [Commented] (KAFKA-2965) Two variables should be exchanged.

2015-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15049090#comment-15049090
 ] 

ASF GitHub Bot commented on KAFKA-2965:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/646


> Two variables should be exchanged.
> --
>
> Key: KAFKA-2965
> URL: https://issues.apache.org/jira/browse/KAFKA-2965
> Project: Kafka
>  Issue Type: Bug
>  Components: controller
>Affects Versions: 0.9.0.0
> Environment: NA
>Reporter: Bo Wang
>Priority: Minor
>  Labels: bug
> Fix For: 0.9.1.0
>
> Attachments: Kafka-2965.patch
>
>
> Two variables should be exchanged in KafkaController.scala as follows:
> val topicsForWhichPartitionReassignmentIsInProgress = 
> controllerContext.partitionsUndergoingPreferredReplicaElection.map(_.topic)
> val topicsForWhichPreferredReplicaElectionIsInProgress = 
> controllerContext.partitionsBeingReassigned.keySet.map(_.topic)
> val topicsIneligibleForDeletion = topicsWithReplicasOnDeadBrokers | 
> topicsForWhichPartitionReassignmentIsInProgress |
>   
> topicsForWhichPreferredReplicaElectionIsInProgress
> Should change to:
> val topicsForWhichPreferredReplicaElectionIsInProgress = 
> controllerContext.partitionsUndergoingPreferredReplicaElection.map(_.topic)
> val topicsForWhichPartitionReassignmentIsInProgress = 
> controllerContext.partitionsBeingReassigned.keySet.map(_.topic)
> val topicsIneligibleForDeletion = topicsWithReplicasOnDeadBrokers | 
> topicsForWhichPartitionReassignmentIsInProgress |
>   
> topicsForWhichPreferredReplicaElectionIsInProgress





[jira] [Updated] (KAFKA-2965) Two variables should be exchanged.

2015-12-09 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2965?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2965:

   Resolution: Fixed
Fix Version/s: 0.9.1.0
   Status: Resolved  (was: Patch Available)

Issue resolved by pull request 646
[https://github.com/apache/kafka/pull/646]

> Two variables should be exchanged.
> --
>
> Key: KAFKA-2965
> URL: https://issues.apache.org/jira/browse/KAFKA-2965
> Project: Kafka
>  Issue Type: Bug
>  Components: controller
>Affects Versions: 0.9.0.0
> Environment: NA
>Reporter: Bo Wang
>Priority: Minor
>  Labels: bug
> Fix For: 0.9.1.0
>
> Attachments: Kafka-2965.patch
>
>
> Two variables should be exchanged in KafkaController.scala as follows:
> val topicsForWhichPartitionReassignmentIsInProgress = 
> controllerContext.partitionsUndergoingPreferredReplicaElection.map(_.topic)
> val topicsForWhichPreferredReplicaElectionIsInProgress = 
> controllerContext.partitionsBeingReassigned.keySet.map(_.topic)
> val topicsIneligibleForDeletion = topicsWithReplicasOnDeadBrokers | 
> topicsForWhichPartitionReassignmentIsInProgress |
>   
> topicsForWhichPreferredReplicaElectionIsInProgress
> Should change to:
> val topicsForWhichPreferredReplicaElectionIsInProgress = 
> controllerContext.partitionsUndergoingPreferredReplicaElection.map(_.topic)
> val topicsForWhichPartitionReassignmentIsInProgress = 
> controllerContext.partitionsBeingReassigned.keySet.map(_.topic)
> val topicsIneligibleForDeletion = topicsWithReplicasOnDeadBrokers | 
> topicsForWhichPartitionReassignmentIsInProgress |
>   
> topicsForWhichPreferredReplicaElectionIsInProgress





[jira] [Commented] (KAFKA-2967) Move Kafka documentation to ReStructuredText

2015-12-09 Thread Jay Kreps (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15049105#comment-15049105
 ] 

Jay Kreps commented on KAFKA-2967:
--

+0

We've been through a few rounds of this, and doc changes tend to be a bit like 
build system changes. Lots of people get excited about changing the framework; 
no one gets excited about making the thing the framework is used for better. We 
usually end up moving over to something no one understands and then it is 
poorly maintained (we are on build system iteration 5 by my count).

I'm in favor of removing the dependency on Apache SSI; I think that would be a 
big improvement.

An easier approach might be to use something like Jekyll, which would allow us 
to take what we have as is and move to markdown bit-by-bit as a convenience; it 
also allows you to fall back to HTML wherever needed and makes live preview 
fairly easy to set up.

I feel previous attempts in this area have optimized for the wrong things. In 
general we write the docs once; the hard part tends to be the good English 
explanations, written by people with deep understanding for people with no 
understanding.

The big problem with the existing docs is that many areas are poorly covered, 
confusing, or out of date. Personally, I think this is secondary to the 
formatting engine.

I agree HTML formatting is verbose, but:
- Everyone in the world knows it.
- If you don't, it is easy to google.
- It is incredibly flexible.

Various things that translate to HTML fix the verbosity but are often very 
limited in what they can do and require you to learn a whole tool chain and 
markup language.

A few things I think are really important:
- The docs should stay part of the main site. Doc tools like Sphinx that dump 
you into a whole different site with different nav and theming are just a 
terrible experience.
- The output should look no worse than it currently does. That reStructuredText 
page looks like it was imported from 1997. Hopefully that is not 
indicative?

Can we see what the proposed output would look like before we adopt any change? 
That is the actually important thing.

> Move Kafka documentation to ReStructuredText
> 
>
> Key: KAFKA-2967
> URL: https://issues.apache.org/jira/browse/KAFKA-2967
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>
> Storing documentation as HTML is kind of BS :)
> * Formatting is a pain, and making it look good is even worse
> * It's just HTML, so we can't generate PDFs
> * Reading and editing is painful
> * Validating changes is hard because our formatting relies on all kinds of 
> Apache Server features.
> I suggest:
> * Move to RST
> * Generate HTML and PDF during build using Sphinx plugin for Gradle.
> Lots of Apache projects are doing this.





[GitHub] kafka pull request: Minor: Fix @link in MetricName comment

2015-12-09 Thread lindong28
GitHub user lindong28 opened a pull request:

https://github.com/apache/kafka/pull/651

Minor: Fix @link in MetricName comment



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/lindong28/kafka minor-fix-link-comment

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/651.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #651


commit aebb8c587fb437b20754078b8e1f5ba1bcc6f2d3
Author: Dong Lin 
Date:   2015-12-09T18:01:36Z

Minor: Fix @link in MetricName comment






[jira] [Commented] (KAFKA-2960) DelayedProduce may cause message loss during repeated leader change

2015-12-09 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15049028#comment-15049028
 ] 

Jiangjie Qin commented on KAFKA-2960:
-

[~guozhang] Got it. Thanks for the explanation.

> DelayedProduce may cause message loss during repeated leader change
> ---
>
> Key: KAFKA-2960
> URL: https://issues.apache.org/jira/browse/KAFKA-2960
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.9.0.0
>Reporter: Xing Huang
> Fix For: 0.9.1.0
>
>
> Related to KAFKA-1148.
> When a leader replica becomes a follower and then leader again, it may have 
> truncated its log as a follower. The second time it becomes leader, its ISR 
> may shrink, and if new messages are appended at this moment, the 
> DelayedProduce generated when it was leader the first time may be satisfied, 
> so the client receives a response with no error even though the messages were 
> actually lost.
> We simulated this scenario, which proved the message loss can happen, and it 
> appears to be the cause of a data loss that recently happened to us, according 
> to broker and client logs.
> I think we should check the leader epoch when sending a response, or satisfy 
> the DelayedProduce on leader change as described in KAFKA-1148.
> We may also need a new error code to inform the producer about this error.
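
As a hedged sketch of the leader-epoch check the description proposes (all 
names are hypothetical; this is not Kafka's actual DelayedProduce code):

{code}
// Hypothetical sketch: remember the leader epoch at append time and refuse
// to complete the delayed produce successfully if the partition's epoch has
// changed, since the log may have been truncated in between.
class DelayedProduceSketch {
    private final int epochAtAppend;

    DelayedProduceSketch(int epochAtAppend) {
        this.epochAtAppend = epochAtAppend;
    }

    /** The error code to return when the produce request becomes satisfied. */
    String complete(int currentLeaderEpoch) {
        if (currentLeaderEpoch != epochAtAppend) {
            // Acking now could silently lose the appended messages.
            return "LEADER_EPOCH_CHANGED"; // the new error code the reporter suggests
        }
        return "NONE";
    }
}
{code}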





[GitHub] kafka pull request: KAFKA-2973; Fix leak of child sensors on remov...

2015-12-09 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/650




[jira] [Updated] (KAFKA-2973) Fix leak of child sensors on remove

2015-12-09 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-2973:
---
Reviewer: Gwen Shapira  (was: Ewen Cheslack-Postava)

> Fix leak of child sensors on remove
> ---
>
> Key: KAFKA-2973
> URL: https://issues.apache.org/jira/browse/KAFKA-2973
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.9.0.0
>Reporter: Ismael Juma
>Assignee: Ismael Juma
> Fix For: 0.9.0.1, 0.9.1.0
>
>
> We added the ability to remove sensors from Kafka Metrics in 0.9.0.0. There 
> is, however, a bug in how we populate the `childrenSensors` map, causing us to 
> leak some child sensors (all but the last one added).
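
A hedged sketch of the leak pattern the description suggests (the real code is 
in the clients' Metrics class; the map and types here are simplified):

{code}
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of the KAFKA-2973 bug pattern: rebuilding the children
// list on every registration keeps only the last child, so earlier children
// are never found (and thus leaked) when the parent sensor is removed.
class ChildrenSensorsSketch {
    private final Map<String, List<String>> childrenSensors = new HashMap<>();

    void registerBuggy(String parent, String child) {
        List<String> children = new ArrayList<>();
        children.add(child);
        childrenSensors.put(parent, children); // BUG: overwrites earlier children
    }

    void registerFixed(String parent, String child) {
        childrenSensors
                .computeIfAbsent(parent, k -> new ArrayList<>())
                .add(child); // keeps every child, so removal can reach them all
    }
}
{code}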





[jira] [Comment Edited] (KAFKA-2967) Move Kafka documentation to ReStructuredText

2015-12-09 Thread Jay Kreps (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15049105#comment-15049105
 ] 

Jay Kreps edited comment on KAFKA-2967 at 12/9/15 6:14 PM:
---

+0

We've been through a few rounds of this, and doc changes tend to be a bit like 
build system changes. Lots of people get excited about changing the framework; 
no one gets excited about using the framework to improve the thing the 
framework is used for. We usually end up moving over to something no one 
understands and then it is poorly maintained (we are on build system iteration 
5 by my count).

I'm in favor of removing the dependency on Apache SSI; I think that would be a 
big improvement.

An easier approach might be to use something like Jekyll, which would allow us 
to take what we have as is and move to markdown bit-by-bit as a convenience; it 
also allows you to fall back to HTML wherever needed and makes live preview 
fairly easy to set up.

I feel previous attempts in this area have optimized for the wrong things. In 
general we write the docs once; the hard part tends to be the good English 
explanations, written by people with deep understanding for people with no 
understanding.

The big problem with the existing docs is that many areas are poorly covered, 
confusing, or out of date. Personally, I think this is secondary to the 
formatting engine. Improving the formatting engine could make things as much as 
10% better, but it could also make the resulting docs a lot worse if it changes 
how they're integrated into the site.

I agree HTML formatting is verbose, but:
- Everyone in the world knows it.
- If you don't, it is easy to google.
- It is incredibly flexible.

Various things that translate to HTML fix the verbosity but are often very 
limited in what they can do and require you to learn a whole tool chain and 
markup language.

A few things I think are really important:
- The docs should stay part of the main site in styling, nav, etc. Doc tools 
like Sphinx that dump you into a whole different site with different nav and 
theming are just a terrible experience.
- The output should look no worse than it currently does. That reStructuredText 
page looks like it was imported from 1997. Hopefully that is not indicative?

Can we see what the proposed output would look like and how it would integrate 
before we adopt any change? That is the actually important thing.


was (Author: jkreps):
+0

We've been through a few rounds of this and doc changes tend to be a bit like 
build system changes. Lots of people get excited about changing the framework, 
no one gets excited about the thing the framework is used for better. We 
usually end up moving over to something no one understands and then it is 
poorly maintained (we are on build system iteration 5 by my count).

I'm in favor of removing the dependency on apache SSI, I think that would be a 
bit improvement.

An easier approach might be to use something like Jykell that would allow us to 
take what we have as is, and move to markdown bit-by-bit as a convenience; it 
also allows you to fall back to html wherever needed and makes live preview 
fairly easy to set up.

I feel previous attempts in this area have optimized for the wrong things. In 
general we write the docs once, the hard part tends to be the good english 
explanations written by people with deep understand for people with no 
understanding.

The big problem with the existing docs is that many areas are poorly covered or 
confusing or out of date. Personally, I think this is secondary to the 
formatting engine.

I agree HTML formatting is verbose, but:
- Everyone in the world knows it.
- If you don't it is easy to google
- It is incredibly flexible

Various things that translate to HTML fix the verbosity but often are very 
limited in what they can do and require you to learn a whole tool chain and 
markup language.

A few things I think that are really important:
- The docs should stay part of the main site. Doc things like sphynx that dump 
you into a whole different site with different nav and theming is just a 
terrible experience.
- The output should look no worse than it currently does. That restructured 
text page looks like it was imported from 1997. Hopefully that is not 
indicitive?

Can we see what the proposed output would look like before we adopt any change. 
That is the actually important thing.

> Move Kafka documentation to ReStructuredText
> 
>
> Key: KAFKA-2967
> URL: https://issues.apache.org/jira/browse/KAFKA-2967
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>
> Storing documentation as HTML is kind of BS :)
> * Formatting is a pain, and making it look good is even worse
> * It's just HTML, so we can't generate PDFs
> * Reading and editing is painful

[GitHub] kafka pull request: MINOR: Use equals instead of ==

2015-12-09 Thread ijuma
GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/652

MINOR: Use equals instead of ==

A few issues found via static analysis.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka use-equals-instead-of-==

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/652.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #652


commit 0175171caf8e974d49a262b6de510b467655889d
Author: Edward Ribeiro 
Date:   2015-09-01T03:24:04Z

MockClient's disconnect() method has two bugs

* First, it compares Strings using `==` instead of `equals()`.
* Second, it tries to remove a String from a Set.
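
The second bug is easy to miss because Collection.remove(Object) compiles for 
any argument type. A minimal illustration (the element type here is made up, 
not MockClient's actual field):

{code}
import java.util.HashSet;
import java.util.Set;

public final class RemoveMismatchDemo {
    public static void main(String[] args) {
        Set<Integer> nodeIds = new HashSet<Integer>();
        nodeIds.add(5);

        // Compiles, but silently removes nothing: remove(Object) returns
        // false because the String "5" equals no element of the set.
        boolean removed = nodeIds.remove("5");
        System.out.println(removed);        // false
        System.out.println(nodeIds.size()); // still 1
    }
}
{code}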

commit 202093842348fc88fd4e13e6c2867c60b732c453
Author: Ismael Juma 
Date:   2015-12-09T16:41:05Z

Use `equals` instead of `==`




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request: Minor: Fix @link in MetricName comment

2015-12-09 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/651


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Build failed in Jenkins: kafka-trunk-jdk7 #889

2015-12-09 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-2972; Add missing `partitionsRemaingList.add` in

--
[...truncated 2793 lines...]
kafka.coordinator.MemberMetadataTest > testVoteForPreferredProtocol PASSED

kafka.coordinator.MemberMetadataTest > testVoteRaisesOnNoSupportedProtocols 
PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[0] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[1] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[2] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[3] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[4] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[5] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[6] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[7] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[8] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[9] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[10] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[11] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[12] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[13] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[14] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[15] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[16] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[17] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[18] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[19] PASSED

kafka.log.OffsetMapTest > testClear PASSED

kafka.log.OffsetMapTest > testBasicValidation PASSED

kafka.log.LogManagerTest > testCleanupSegmentsToMaintainSize PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithRelativeDirectory 
PASSED

kafka.log.LogManagerTest > testGetNonExistentLog PASSED

kafka.log.LogManagerTest > testTwoLogManagersUsingSameDirFails PASSED

kafka.log.LogManagerTest > testLeastLoadedAssignment PASSED

kafka.log.LogManagerTest > testCleanupExpiredSegments PASSED

kafka.log.LogManagerTest > testCheckpointRecoveryPoints PASSED

kafka.log.LogManagerTest > testTimeBasedFlush PASSED

kafka.log.LogManagerTest > testCreateLog PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithTrailingSlash PASSED

kafka.log.LogSegmentTest > testRecoveryWithCorruptMessage PASSED

kafka.log.LogSegmentTest > testRecoveryFixesCorruptIndex PASSED

kafka.log.LogSegmentTest > testReadFromGap PASSED

kafka.log.LogSegmentTest > testTruncate PASSED

kafka.log.LogSegmentTest > testReadBeforeFirstOffset PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeAppendMessage PASSED

kafka.log.LogSegmentTest > testChangeFileSuffixes PASSED

kafka.log.LogSegmentTest > testMaxOffset PASSED

kafka.log.LogSegmentTest > testNextOffsetCalculation PASSED

kafka.log.LogSegmentTest > testReadOnEmptySegment PASSED

kafka.log.LogSegmentTest > testReadAfterLast PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeClearShutdown PASSED

kafka.log.LogSegmentTest > testTruncateFull PASSED

kafka.log.FileMessageSetTest > testTruncate PASSED

kafka.log.FileMessageSetTest > testIterationOverPartialAndTruncation PASSED

kafka.log.FileMessageSetTest > testRead PASSED

kafka.log.FileMessageSetTest > testFileSize PASSED

kafka.log.FileMessageSetTest > testIteratorWithLimits PASSED

kafka.log.FileMessageSetTest > testPreallocateTrue PASSED

kafka.log.FileMessageSetTest > testIteratorIsConsistent PASSED

kafka.log.FileMessageSetTest > testIterationDoesntChangePosition PASSED

kafka.log.FileMessageSetTest > testWrittenEqualsRead PASSED

kafka.log.FileMessageSetTest > testWriteTo PASSED

kafka.log.FileMessageSetTest > testPreallocateFalse PASSED

kafka.log.FileMessageSetTest > testPreallocateClearShutdown PASSED

kafka.log.FileMessageSetTest > testSearch PASSED

kafka.log.FileMessageSetTest > testSizeInBytes PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[0] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[1] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[2] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[3] PASSED

kafka.log.LogTest > testParseTopicPartitionNameForMissingTopic PASSED

kafka.log.LogTest > testIndexRebuild PASSED

kafka.log.LogTest > testLogRolls PASSED

kafka.log.LogTest > testMessageSizeCheck PASSED

kafka.log.LogTest > testAsyncDelete PASSED

kafka.log.LogTest > testReadOutOfRange PASSED

kafka.log.LogTest > testReadAtLogGap PASSED

kafka.log.LogTest > testTimeBasedLogRoll PASSED

kafka.log.LogTest > testLoadEmptyLog PASSED

kafka.log.LogTest > testMessageSetSizeCheck PASSED

kafka.log.LogTest > testIndexResizingAtTruncation PASSED

kafka.log.LogTest > 

[jira] [Commented] (KAFKA-2966) 0.9.0 docs missing upgrade notes regarding replica lag

2015-12-09 Thread Aditya Auradkar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15049357#comment-15049357
 ] 

Aditya Auradkar commented on KAFKA-2966:


I'll work on it since I made those changes.

> 0.9.0 docs missing upgrade notes regarding replica lag
> --
>
> Key: KAFKA-2966
> URL: https://issues.apache.org/jira/browse/KAFKA-2966
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Assignee: Aditya Auradkar
>
> We should document that:
> * replica.lag.max.messages is gone
> * replica.lag.time.max.ms has a new meaning
> In the upgrade section. People can get caught by surprise.
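
For reference, a hedged sketch of the surviving setting in a 0.9 
server.properties (10000 ms is the 0.9 default, shown only for illustration):

{code}
# replica.lag.max.messages no longer exists in 0.9.0; remove it from configs.
# replica.lag.time.max.ms now covers both cases: a follower is dropped from
# the ISR if it has not sent a fetch request, or has not caught up to the
# leader's log end offset, within this window.
replica.lag.time.max.ms=10000
{code}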



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [DISCUSS] KIP-32 - Add CreateTime and LogAppendTime to Kafka message

2015-12-09 Thread Jay Kreps
Hey Becket,

That summary of pros and cons sounds about right to me.

There are potentially two actions you could take when
max.message.time.difference is exceeded--override it or reject the
message entirely. Can we pick one of these or does the action need to
be configurable too? (I'm not sure). The downside of more
configuration is that it is more fiddly and has more modes.

I suppose the reason I was thinking of this as a "difference" rather
than a hard type was that if you were going to go the reject route you
would need some tolerance setting (i.e. an SLA like "if your timestamp
is off by more than 10 minutes, you get an error"). I agree with you
that having one field that potentially contains a mix of two values is
a bit weird.

-Jay

On Mon, Dec 7, 2015 at 5:17 PM, Becket Qin  wrote:
> It looks like the format of the previous email was messed up. Sending it again.
>
> Just to recap, the last proposal Jay made (with some implementation
> details added)
> was:
>
> 1. Allow the user to stamp the message at produce time
>
> 2. When the broker receives a message, it takes a look at the difference
> between its local time and the timestamp in the message.
>   a. If the time difference is within a configurable
> max.message.time.difference.ms, the server will accept it and append it to
> the log.
>   b. If the time difference is beyond the configured
> max.message.time.difference.ms, the server will override the timestamp with
> its current local time and append the message to the log.
>   c. The default value of max.message.time.difference would be set to
> Long.MaxValue.
>
> 3. The configurable time difference threshold
> max.message.time.difference.ms will
> be a per topic configuration.
>
> 4. The index will be built so it has the following guarantees.
>   a. If the user searches by timestamp:
>   - all the messages after that timestamp will be consumed.
>   - the user might see earlier messages.
>   b. The log retention will look at the last entry in the time index file.
> Because the last entry will be the latest timestamp in the entire log
> segment, the log segment will be deleted once that entry expires.
>   c. The log rolling has to depend on the earliest timestamp. In this case
> we may need to keep an in-memory timestamp only for the current active log
> segment. On recovery, we will need to read the active log segment to get the
> timestamp of its earliest message.
>
> 5. The downsides of this proposal are:
>   a. The timestamp might not be monotonically increasing.
>   b. The log retention might become non-deterministic, i.e. when a message
> will be deleted now depends on the timestamps of the other messages in the
> same log segment, and those timestamps are provided by the user within a
> range that depends on the time difference threshold configuration.
>   c. The semantic meaning of the timestamp in the messages could be a little
> bit vague, because some of them come from the producer and some of them are
> overwritten by brokers.
>
> 6. Although the proposal has some downsides, it gives the user the
> flexibility to use the timestamp.
>   a. If the threshold is set to Long.MaxValue, the timestamp in the message
> is equivalent to CreateTime.
>   b. If the threshold is set to 0, the timestamp in the message is
> equivalent to LogAppendTime.
>
> This proposal actually allows the user to use either CreateTime or
> LogAppendTime without introducing two timestamp concepts at the same time. I
> have updated the wikis for KIP-32 and KIP-33 with this proposal.
>
> One thing I am thinking is that instead of having a time difference
> threshold, should we simply have a TimestampType configuration? Because in
> most cases, people will either set the threshold to 0 or Long.MaxValue.
> Setting anything in between will make the timestamp in the message
> meaningless to the user - users don't know if the timestamp has been
> overwritten by the brokers.
>
> Any thoughts?
>
> Thanks,
> Jiangjie (Becket) Qin
>
> On Mon, Dec 7, 2015 at 10:33 AM, Jiangjie Qin 
> wrote:
>
>> Bump up this thread.
>>
>> Just to recap, the last proposal Jay made (with some implementation details
>> added) was:
>>
>>1. Allow the user to stamp the message at produce time
>>2. When the broker receives a message it takes a look at the difference
>>between its local time and the timestamp in the message.
>>   - If the time difference is within a configurable
>>   max.message.time.difference.ms, the server will accept it and append
>>   it to the log.
>>   - If the time difference is beyond the configured
>>   max.message.time.difference.ms, the server will override the
>>   timestamp with its current local time and append the message to the
>> log.
>>   - The default value of max.message.time.difference would be set to
>>   Long.MaxValue.
>>   3. The configurable time difference threshold
>>max.message.time.difference.ms will be a per topic configuration.
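
To make the recapped broker-side logic concrete, here is a minimal sketch of 
the proposal (names and structure are made up for illustration; this is not 
code from Kafka):

{code}
// Hedged sketch of the proposed broker-side decision, not actual Kafka code.
// maxDifferenceMs stands in for the proposed per-topic
// max.message.time.difference.ms configuration.
public final class TimestampPolicySketch {

    static long effectiveTimestamp(long producerTimestamp, long brokerNowMs,
                                   long maxDifferenceMs) {
        if (Math.abs(brokerNowMs - producerTimestamp) <= maxDifferenceMs) {
            return producerTimestamp; // 2a: keep the producer's CreateTime
        }
        return brokerNowMs;           // 2b: override with LogAppendTime
    }

    public static void main(String[] args) {
        long now = System.currentTimeMillis();
        // Threshold Long.MAX_VALUE: behaves like CreateTime (6a above).
        System.out.println(effectiveTimestamp(now - 60000, now, Long.MAX_VALUE));
        // Threshold 0: behaves like LogAppendTime (6b above).
        System.out.println(effectiveTimestamp(now - 60000, now, 0L));
    }
}
{code}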

[jira] [Created] (KAFKA-2974) `==` is used incorrectly in a few places in Java code

2015-12-09 Thread Ismael Juma (JIRA)
Ismael Juma created KAFKA-2974:
--

 Summary: `==` is used incorrectly in a few places in Java code
 Key: KAFKA-2974
 URL: https://issues.apache.org/jira/browse/KAFKA-2974
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.9.0.0
Reporter: Ismael Juma
Assignee: Ismael Juma


Unlike Scala, `==` is reference equality in Java and one normally wants to use 
`equals`. We should fix the cases where `==` is used incorrectly.
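
For readers coming from Scala, a minimal, self-contained illustration of the 
difference (the strings are hypothetical, not taken from the affected classes):

{code}
public final class EqualsDemo {
    public static void main(String[] args) {
        // Force two distinct String objects with the same contents.
        String a = new String("kafka");
        String b = new String("kafka");

        System.out.println(a == b);      // false: compares references
        System.out.println(a.equals(b)); // true: compares contents
    }
}
{code}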



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Setup for debugging Kafka in eclipse

2015-12-09 Thread Vahid S Hashemian
Hi all,

I'm new to both Kafka and Scala and would like to set up my Eclipse IDE to 
run Kafka in debug mode so I can trace the execution of various commands and 
better understand how they are implemented.
I set up my dev environment using the instructions at 
https://cwiki.apache.org/confluence/display/KAFKA/Eclipse-Scala-Gradle-Git+Developement+Environment+Setup
.
I now have several projects in my workspace (api, client, connect, core, 
...) and they seem to be loaded fine - I don't see any errors reported by 
Eclipse.

I would appreciate any pointers that help me set this up.

Thanks.
--Vahid




[jira] [Updated] (KAFKA-2972) ControlledShutdownResponse always serialises `partitionsRemaining` as empty

2015-12-09 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-2972:
---
Reviewer: Guozhang Wang  (was: Ewen Cheslack-Postava)

> ControlledShutdownResponse always serialises `partitionsRemaining` as empty
> ---
>
> Key: KAFKA-2972
> URL: https://issues.apache.org/jira/browse/KAFKA-2972
> Project: Kafka
>  Issue Type: Bug
>  Components: network
>Affects Versions: 0.9.0.0
>Reporter: Ismael Juma
>Assignee: Ismael Juma
> Fix For: 0.9.0.1
>
>
> This only affects the Java response class which is not used for serialisation 
> in 0.9.0, but will be in 0.9.1.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2974) `==` is used incorrectly in a few places in Java code

2015-12-09 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15049437#comment-15049437
 ] 

Ismael Juma commented on KAFKA-2974:


https://github.com/apache/kafka/pull/652

> `==` is used incorrectly in a few places in Java code
> -
>
> Key: KAFKA-2974
> URL: https://issues.apache.org/jira/browse/KAFKA-2974
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
>Reporter: Ismael Juma
>Assignee: Ismael Juma
>
> Unlike Scala, `==` is reference equality in Java and one normally wants to 
> use `equals`. We should fix the cases where `==` is used incorrectly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (KAFKA-2966) 0.9.0 docs missing upgrade notes regarding replica lag

2015-12-09 Thread Aditya Auradkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aditya Auradkar reassigned KAFKA-2966:
--

Assignee: Aditya Auradkar

> 0.9.0 docs missing upgrade notes regarding replica lag
> --
>
> Key: KAFKA-2966
> URL: https://issues.apache.org/jira/browse/KAFKA-2966
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Assignee: Aditya Auradkar
>
> We should document that:
> * replica.lag.max.messages is gone
> * replica.lag.time.max.ms has a new meaning
> In the upgrade section. People can get caught by surprise.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2974) `==` is used incorrectly in a few places in Java code

2015-12-09 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-2974:
---
Status: Patch Available  (was: Open)

> `==` is used incorrectly in a few places in Java code
> -
>
> Key: KAFKA-2974
> URL: https://issues.apache.org/jira/browse/KAFKA-2974
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
>Reporter: Ismael Juma
>Assignee: Ismael Juma
>
> Unlike Scala, `==` is reference equality in Java and one normally wants to 
> use `equals`. We should fix the cases where `==` is used incorrectly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1894) Avoid long or infinite blocking in the consumer

2015-12-09 Thread Jason Gustafson (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15049441#comment-15049441
 ] 

Jason Gustafson commented on KAFKA-1894:


There's been a ton of movement on the new consumer since this issue was first 
posted, so here's an update of the current blocking calls:

1. poll(timeout) blocks indefinitely for a) finding the coordinator, b) joining 
the group, and c) fetching/resetting offsets. The last of these may require an 
OffsetFetch to get the last committed position or a ListOffset to reset the 
position to the earliest or latest offset. Obviously we depend on the 
coordinator being available to join the group, but we also depend on partition 
leaders being available if we need to call ListOffset.
2. commitSync() blocks indefinitely until the commit succeeds. This may involve 
finding a new coordinator if the old one has failed.
3. position() blocks to set the position (if it needs to be set). This is 
similar to case c) in poll() above.
4. committed() blocks to fetch the last committed position if the consumer has 
no cached commit.
5. partitionsFor()/listTopics() blocks to send a TopicMetadataRequest to any of 
the brokers (if the request cannot be served from the cache).
6. close() blocks if auto-commit is enabled in a call to commitSync().

In all of these cases, we're fairly careful to propagate unrecoverable errors 
to the user. For example, commitSync() will not retry a commit if it receives 
an ILLEGAL_GENERATION since there is no way the commit can succeed after that 
error. However, there are still some situations where the blocking can be 
prolonged. In the most extreme case, if the consumer cannot connect to any of 
the brokers it knows about, it will retry indefinitely until it can. Other than 
that, the main cases that come to mind are blocking in ListOffsets when the 
partition leader is not available, and blocking in coordinator discovery when 
the coordinator cannot be found (e.g. if there is no leader for the 
corresponding partition of __consumer_offsets).

Going forward, it would be ideal to have poll() enforce the timeout parameter 
in any situation. This is complicated mainly by the fact that we may have to 
leave an active rebalance in progress, which will surely require additional 
state tracking. There are some subtle implications as well. For example, if we 
return to the user with a JoinGroup on the wire, it could actually return in a 
separate blocking call and have its handler callback invoked. We'd have to be 
careful that this doesn't cause any surprises for the user (e.g. partitions 
getting revoked while a call to position() is active). We also have limited 
options when it comes to handling the rebalance callback which could itself 
call another blocking method such as commitSync(). Since we have only one 
thread to work with, there doesn't seem like much we can do in this case.

The other blocking calls are more straightforward: we can just raise a 
TimeoutException after a configurable amount of time has passed. The producer 
has a setting "max.block.ms" which we could borrow for this purpose (guess we 
would need a KIP for this now). But similarly as in poll(), we'll have to be 
careful about any state we're leaving behind when the exceptions are thrown (in 
particular requests left on the wire).

An open question for the consumer is what its behavior should be if a partition 
leader cannot be found. Once the initial offset has been found, we generally 
handle leader failures gracefully by requesting metadata updates in the 
background and continuing to fetch from the other partitions. But if the leader 
failure occurs before we've fetched the initial offset, we will not send any 
fetches until we've found the new leader. This case is probably rare in 
practice, but it would seem more desirable (and more consistent) to let 
fetching continue on other partitions. This will require decoupling the offset 
state of individual partitions, which may be tricky.
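
One tool that already exists for bounding these calls is 
KafkaConsumer.wakeup(), which another thread can call to make a blocked poll() 
throw WakeupException. A hedged sketch, with made-up topic, group, and deadline:

{code}
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.errors.WakeupException;

public final class BoundedPollSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("group.id", "sketch-group");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        final KafkaConsumer<String, String> consumer =
                new KafkaConsumer<String, String>(props);
        consumer.subscribe(Collections.singletonList("sketch-topic"));

        // Watchdog thread: our own deadline, since poll()'s timeout parameter
        // is not honored while the consumer blocks on coordinator discovery,
        // joining the group, or fetching offsets.
        Thread watchdog = new Thread(new Runnable() {
            public void run() {
                try {
                    Thread.sleep(30000);
                    consumer.wakeup(); // thread-safe; makes poll() throw
                } catch (InterruptedException ignored) {
                    // normal shutdown: poll() returned in time
                }
            }
        });
        watchdog.start();

        try {
            ConsumerRecords<String, String> records = consumer.poll(1000);
            System.out.println("got " + records.count() + " records");
        } catch (WakeupException e) {
            System.out.println("gave up waiting on the cluster");
        } finally {
            watchdog.interrupt();
            consumer.close();
        }
    }
}
{code}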

> Avoid long or infinite blocking in the consumer
> ---
>
> Key: KAFKA-1894
> URL: https://issues.apache.org/jira/browse/KAFKA-1894
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Reporter: Jay Kreps
>Assignee: Jason Gustafson
> Fix For: 0.10.0.0
>
>
> The new consumer has a lot of loops that look something like
> {code}
>   while(!isThingComplete())
> client.poll();
> {code}
> This occurs both in KafkaConsumer but also in NetworkClient.completeAll. 
> These retry loops are actually mostly the behavior we want but there are 
> several cases where they may cause problems:
>  - In the case of a hard failure we may hang for a long time or indefinitely 
> before realizing the connection is lost.
>  - In the case where the cluster is malfunctioning 

[jira] [Commented] (KAFKA-2967) Move Kafka documentation to ReStructuredText

2015-12-09 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15049329#comment-15049329
 ] 

Gwen Shapira commented on KAFKA-2967:
-

RST is very commonly used in open source projects and is very well documented 
and easy to google. I don't think it will be an impediment to contributions 
(it isn't in Python, Sqoop, Flume, etc.). I'm not sure flexibility of 
formatting is an advantage in project documentation, where standardization is 
often better.

Regarding the output:

Sphinx supports HTML themes, so we are really very flexible regarding the 
look. We can use existing themes or make our own to better match the look of 
the Kafka site. You can see how this works, and a few examples, here: 
http://sphinx-doc.org/theming.html

Here are some documentation sites built with Sphinx and RST, so you can see 
some of the options:

http://docs.confluent.io/2.0.0/platform.html
http://sqoop.apache.org/docs/1.4.6/SqoopUserGuide.html
https://flume.apache.org/

The main point is that they all look different, and we can make sure the Kafka 
documentation will work for us.

By separating our content from the look and feel, we really do make it easier 
for people to contribute documentation. Just moving the docs to our GitHub 
caused a huge uptick in contributions. Automating doc generation was another 
step toward improving documentation (and made life easier for release 
managers). I'm trying for another incremental improvement here :)


> Move Kafka documentation to ReStructuredText
> 
>
> Key: KAFKA-2967
> URL: https://issues.apache.org/jira/browse/KAFKA-2967
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>
> Storing documentation as HTML is kind of BS :)
> * Formatting is a pain, and making it look good is even worse
> * It's just HTML, can't generate PDFs
> * Reading and editing is painful
> * Validating changes is hard because our formatting relies on all kinds of 
> Apache Server features.
> I suggest:
> * Move to RST
> * Generate HTML and PDF during build using Sphinx plugin for Gradle.
> Lots of Apache projects are doing this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: HOTFIX: fix table-table outer join and left jo...

2015-12-09 Thread ymatsuda
GitHub user ymatsuda opened a pull request:

https://github.com/apache/kafka/pull/653

HOTFIX: fix table-table outer join and left join. more tests

@guozhangwang 

* fixed bugs in table-table outer/left joins
* added more tests

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ymatsuda/kafka join_tests

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/653.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #653


commit bc8bbd3c3662d79e93331a109f85bd7a168f45ad
Author: Yasuhiro Matsuda 
Date:   2015-12-09T21:29:52Z

HOTFIX: fix table-table outer join and left join. more tests




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Assigned] (KAFKA-2653) Stateful operations in the KStream DSL layer

2015-12-09 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2653?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang reassigned KAFKA-2653:


Assignee: Guozhang Wang

> Stateful operations in the KStream DSL layer
> 
>
> Key: KAFKA-2653
> URL: https://issues.apache.org/jira/browse/KAFKA-2653
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
>
> This includes the interface design and the implementation for stateful 
> operations, including:
> 0. table representation in KStream.
> 1. stream-stream join.
> 2. stream-table join.
> 3. table-table join.
> 4. stream / table aggregations.
> With 0 and 3 being tackled in KAFKA-2856 and KAFKA-2962 separately, this 
> ticket is going to only focus on windowing definition and 1 / 2 / 4 above.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2896 Added system test for partition re-...

2015-12-09 Thread apovzner
GitHub user apovzner opened a pull request:

https://github.com/apache/kafka/pull/655

KAFKA-2896 Added system test for partition re-assignment

Partition re-assignment tests with and without broker failure.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apovzner/kafka kafka_2896

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/655.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #655


commit bddca8055a70ccc4385e7898fd6ff2eb38db
Author: Anna Povzner 
Date:   2015-12-10T01:06:11Z

KAFKA-2896 Added system test for partition re-assignment




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request: MINOR: Trivial doc/ typo fixes.

2015-12-09 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/654


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-2926) [MirrorMaker] InternalRebalancer calls wrong method of external rebalancer

2015-12-09 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2926:

Reviewer: Ewen Cheslack-Postava

Since you owe me about 70k lines of code in reviews ;)

> [MirrorMaker] InternalRebalancer calls wrong method of external rebalancer
> --
>
> Key: KAFKA-2926
> URL: https://issues.apache.org/jira/browse/KAFKA-2926
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.9.0.0
>Reporter: Gwen Shapira
>Assignee: Gwen Shapira
>
> MirrorMaker has an internal rebalance listener that will invoke an external 
> (pluggable) listener if such exists. Looks like the internal listener calls 
> the wrong method of the external listener.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2928) system tests: failures in version-related sanity checks

2015-12-09 Thread Geoff Anderson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoff Anderson updated KAFKA-2928:
--
Reviewer: Gwen Shapira

> system tests: failures in version-related sanity checks
> ---
>
> Key: KAFKA-2928
> URL: https://issues.apache.org/jira/browse/KAFKA-2928
> Project: Kafka
>  Issue Type: Bug
>Reporter: Geoff Anderson
>Assignee: Geoff Anderson
>
> There have been a few consecutive failures of version-related sanity checks 
> in nightly system test runs:
> kafkatest.sanity_checks.test_verifiable_producer
> kafkatest.sanity_checks.test_kafka_version
> assert is_version(...) is failing
> utils.util.is_version is a fairly rough heuristic, so most likely this needs 
> to be updated.
> E.g., see
> http://testing.confluent.io/kafka/2015-12-01--001/
> (if this is broken, use 
> http://testing.confluent.io/kafka/2015-12-01--001.tar.gz)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2928: system test: fix version sanity ch...

2015-12-09 Thread granders
GitHub user granders opened a pull request:

https://github.com/apache/kafka/pull/656

KAFKA-2928: system test: fix version sanity checks

Fixed version sanity checks by updating the kafkatest version to match the 
kafka version

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/confluentinc/kafka 
KAFKA-2928-fix-version-sanity-checks

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/656.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #656


commit 1196d5aefa32e338881e7b3b50682e082733c625
Author: Geoff Anderson 
Date:   2015-12-10T01:43:04Z

Fixed version sanity checks by updating the kafkatest version to match the 
kafka version




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-2928) system tests: failures in version-related sanity checks

2015-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15049839#comment-15049839
 ] 

ASF GitHub Bot commented on KAFKA-2928:
---

GitHub user granders opened a pull request:

https://github.com/apache/kafka/pull/656

KAFKA-2928: system test: fix version sanity checks

Fixed version sanity checks by updating the kafkatest version to match the 
kafka version

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/confluentinc/kafka 
KAFKA-2928-fix-version-sanity-checks

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/656.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #656


commit 1196d5aefa32e338881e7b3b50682e082733c625
Author: Geoff Anderson 
Date:   2015-12-10T01:43:04Z

Fixed version sanity checks by updating the kafkatest version to match the 
kafka version




> system tests: failures in version-related sanity checks
> ---
>
> Key: KAFKA-2928
> URL: https://issues.apache.org/jira/browse/KAFKA-2928
> Project: Kafka
>  Issue Type: Bug
>Reporter: Geoff Anderson
>Assignee: Geoff Anderson
>
> There have been a few consecutive failures of version-related sanity checks 
> in nightly system test runs:
> kafkatest.sanity_checks.test_verifiable_producer
> kafkatest.sanity_checks.test_kafka_version
> assert is_version(...) is failing
> utils.util.is_version is a fairly rough heuristic, so most likely this needs 
> to be updated.
> E.g., see
> http://testing.confluent.io/kafka/2015-12-01--001/
> (if this is broken, use 
> http://testing.confluent.io/kafka/2015-12-01--001.tar.gz)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: MINOR: Trivial doc/ typo fixes.

2015-12-09 Thread alexlod
GitHub user alexlod opened a pull request:

https://github.com/apache/kafka/pull/654

MINOR: Trivial doc/ typo fixes.

The change in `docs/design.html` is hard to catch in the diff -- a `tbe` is 
changed to `the`. All other changes show up clearly in the diff.

This contribution is my original work and I license the work to the project 
under the project's open source license.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/alexlod/kafka doc-typo-fixes

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/654.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #654


commit 4aa89cf53889ce63775516893204ee03b7444f45
Author: Alex Loddengaard 
Date:   2015-12-09T22:53:08Z

Trivial doc/ typo fixes.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-2927) System tests: reduce storage footprint of collected logs

2015-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15049845#comment-15049845
 ] 

ASF GitHub Bot commented on KAFKA-2927:
---

GitHub user granders opened a pull request:

https://github.com/apache/kafka/pull/657

KAFKA-2927: reduce system test storage footprint

Split kafka logging into two levels - DEBUG and INFO, and do not collect 
DEBUG by default.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/confluentinc/kafka 
KAFKA-2927-reduce-log-footprint

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/657.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #657


commit 0dc3a1a367083f57f3cb6d8e1cd82571598d7108
Author: Geoff Anderson 
Date:   2015-12-10T01:09:59Z

Split kafka logging into two levels - DEBUG and INFO, and do not collect 
DEBUG by default




> System tests: reduce storage footprint of collected logs
> 
>
> Key: KAFKA-2927
> URL: https://issues.apache.org/jira/browse/KAFKA-2927
> Project: Kafka
>  Issue Type: Bug
>Reporter: Geoff Anderson
>Assignee: Geoff Anderson
>
> Looking at recent nightly test runs (testing.confluent.io/kafka), the storage 
> requirements for log output from the various services have increased 
> significantly, up to 7-10G for a single test run, up from hundreds of MB.
> Current breakdown:
> 23M   Benchmark
> 3.2M  ClientCompatibilityTest
> 613M  ConnectDistributedTest
> 1.1M  ConnectRestApiTest
> 1.5M  ConnectStandaloneFileTest
> 2.0M  ConsoleConsumerTest
> 440K  KafkaVersionTest
> 744K  Log4jAppenderTest
> 49M   QuotaTest
> 3.0G  ReplicationTest
> 1.2G  TestMirrorMakerService
> 185M  TestUpgrade
> 372K  TestVerifiableProducer
> 2.3G  VerifiableConsumerTest
> The biggest contributors in these test suites:
> ReplicationTest:
> verifiable_producer.log (currently TRACE level)
> VerifiableConsumerTest:
> kafka server.log
> TestMirrorMakerService:
> verifiable_producer.log
> ConnectDistributedTest:
> kafka server.log
> The worst offenders are therefore verifiable_producer.log, which is logging 
> at TRACE level, and kafka server.log, which is logging at DEBUG level.
> One solution is to:
> 1) Update the log4j configs to log separately to both an INFO-level file and 
> another file for DEBUG, at least for the worst offenders.
> 2) Don't collect these DEBUG (and below) logs by default; only mark them for 
> collection on failure.
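
A minimal log4j.properties sketch of option 1) above (appender and file names 
are made up; this is an illustration, not the actual patch):

{code}
# Sketch only: route the same loggers to two files at different thresholds.
log4j.rootLogger=DEBUG, infoFile, debugFile

log4j.appender.infoFile=org.apache.log4j.FileAppender
log4j.appender.infoFile.File=server-info.log
log4j.appender.infoFile.Threshold=INFO
log4j.appender.infoFile.layout=org.apache.log4j.PatternLayout
log4j.appender.infoFile.layout.ConversionPattern=[%d] %p %m (%c)%n

log4j.appender.debugFile=org.apache.log4j.FileAppender
log4j.appender.debugFile.File=server-debug.log
log4j.appender.debugFile.Threshold=DEBUG
log4j.appender.debugFile.layout=org.apache.log4j.PatternLayout
log4j.appender.debugFile.layout.ConversionPattern=[%d] %p %m (%c)%n
{code}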



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2927: reduce system test storage footpri...

2015-12-09 Thread granders
GitHub user granders opened a pull request:

https://github.com/apache/kafka/pull/657

KAFKA-2927: reduce system test storage footprint

Split kafka logging into two levels - DEBUG and INFO, and do not collect 
DEBUG by default.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/confluentinc/kafka 
KAFKA-2927-reduce-log-footprint

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/657.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #657


commit 0dc3a1a367083f57f3cb6d8e1cd82571598d7108
Author: Geoff Anderson 
Date:   2015-12-10T01:09:59Z

Split kafka logging into two levels - DEBUG and INFO, and do not collect 
DEBUG by default




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-2837) FAILING TEST: kafka.api.ProducerBounceTest > testBrokerFailure

2015-12-09 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15049652#comment-15049652
 ] 

Guozhang Wang commented on KAFKA-2837:
--

[~jinxing6...@126.com] Left some comments in the PR.

> FAILING TEST: kafka.api.ProducerBounceTest > testBrokerFailure 
> ---
>
> Key: KAFKA-2837
> URL: https://issues.apache.org/jira/browse/KAFKA-2837
> Project: Kafka
>  Issue Type: Sub-task
>Affects Versions: 0.9.0.0
>Reporter: Gwen Shapira
>Assignee: jin xing
>  Labels: newbie
> Fix For: 0.9.0.1, 0.9.1.0
>
>
> {code}
> java.lang.AssertionError
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertTrue(Assert.java:52)
>   at 
> kafka.api.ProducerBounceTest.testBrokerFailure(ProducerBounceTest.scala:117)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:105)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:56)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:64)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:50)
>   at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.messaging.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
>   at 
> org.gradle.messaging.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
>   at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
>   at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:106)
>   at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.messaging.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:360)
>   at 
> org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:54)
>   at 
> org.gradle.internal.concurrent.StoppableExecutorImpl$1.run(StoppableExecutorImpl.java:40)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at 

[jira] [Updated] (KAFKA-2578) Client Metadata internal state should be synchronized

2015-12-09 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava updated KAFKA-2578:
-
Reviewer: Ewen Cheslack-Postava

> Client Metadata internal state should be synchronized
> -
>
> Key: KAFKA-2578
> URL: https://issues.apache.org/jira/browse/KAFKA-2578
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Assignee: Edward Ribeiro
>Priority: Trivial
>
> Some recent patches introduced a couple new fields in o.a.k.clients.Metadata: 
> 'listeners' and 'needMetadataForAllTopics'. Accessor methods for these fields 
> should be synchronized like the rest of the internal Metadata state.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2928) system tests: failures in version-related sanity checks

2015-12-09 Thread Geoff Anderson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoff Anderson updated KAFKA-2928:
--
Status: Patch Available  (was: Open)

> system tests: failures in version-related sanity checks
> ---
>
> Key: KAFKA-2928
> URL: https://issues.apache.org/jira/browse/KAFKA-2928
> Project: Kafka
>  Issue Type: Bug
>Reporter: Geoff Anderson
>Assignee: Geoff Anderson
>
> There have been a few consecutive failures of version-related sanity checks 
> in nightly system test runs:
> kafkatest.sanity_checks.test_verifiable_producer
> kafkatest.sanity_checks.test_kafka_version
> assert is_version(...) is failing
> utils.util.is_version is a fairly rough heuristic, so most likely this needs 
> to be updated.
> E.g., see
> http://testing.confluent.io/kafka/2015-12-01--001/
> (if this is broken, use 
> http://testing.confluent.io/kafka/2015-12-01--001.tar.gz)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2896) System test for partition re-assignment

2015-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15049793#comment-15049793
 ] 

ASF GitHub Bot commented on KAFKA-2896:
---

GitHub user apovzner opened a pull request:

https://github.com/apache/kafka/pull/655

KAFKA-2896 Added system test for partition re-assignment

Partition re-assignment tests with and without broker failure.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apovzner/kafka kafka_2896

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/655.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #655


commit bddca8055a70ccc4385e7898fd6ff2eb38db
Author: Anna Povzner 
Date:   2015-12-10T01:06:11Z

KAFKA-2896 Added system test for partition re-assignment




> System test for partition re-assignment
> ---
>
> Key: KAFKA-2896
> URL: https://issues.apache.org/jira/browse/KAFKA-2896
> Project: Kafka
>  Issue Type: Task
>Reporter: Gwen Shapira
>Assignee: Anna Povzner
>
> Lots of users depend on partition re-assignment tool to manage their cluster. 
> Will be nice to have a simple system tests that creates a topic with few 
> partitions and few replicas, reassigns everything and validates the ISR 
> afterwards. 
> Just to make sure we are not breaking anything. Especially since we have 
> plans to improve (read: modify) this area.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (KAFKA-2928) system tests: failures in version-related sanity checks

2015-12-09 Thread Geoff Anderson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoff Anderson reassigned KAFKA-2928:
-

Assignee: Geoff Anderson

> system tests: failures in version-related sanity checks
> ---
>
> Key: KAFKA-2928
> URL: https://issues.apache.org/jira/browse/KAFKA-2928
> Project: Kafka
>  Issue Type: Bug
>Reporter: Geoff Anderson
>Assignee: Geoff Anderson
>
> There have been a few consecutive failures of version-related sanity checks 
> in nightly system test runs:
> kafkatest.sanity_checks.test_verifiable_producer
> kafkatest.sanity_checks.test_kafka_version
> assert is_version(...) is failing
> utils.util.is_version is a fairly rough heuristic, so most likely this needs 
> to be updated.
> E.g., see
> http://testing.confluent.io/kafka/2015-12-01--001/
> (if this is broken, use 
> http://testing.confluent.io/kafka/2015-12-01--001.tar.gz)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2653) Stateful operations in the KStream DSL layer

2015-12-09 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2653?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-2653:
-
Description: 
This includes the interface design and the implementation for stateful 
operations, including:

0. table representation in KStream.
1. stream-stream join.
2. stream-table join.
3. table-table join.
4. stream / table aggregations.

With 0 and 3 being tackled in KAFKA-2856 and KAFKA-2962 separately, this ticket 
is going to only focus on windowing definition and 1 / 2 / 4 above.

  was:
This includes the interface design and the implementation for stateful 
operations, including:

0. table representation in KStream.
1. stream-stream join.
2. stream-table join.
3. table-table join.
4. stream / table aggregations.


> Stateful operations in the KStream DSL layer
> 
>
> Key: KAFKA-2653
> URL: https://issues.apache.org/jira/browse/KAFKA-2653
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Guozhang Wang
>
> This includes the interface design and the implementation for stateful 
> operations, including:
> 0. table representation in KStream.
> 1. stream-stream join.
> 2. stream-table join.
> 3. table-table join.
> 4. stream / table aggregations.
> With 0 and 3 being tackled in KAFKA-2856 and KAFKA-2962 separately, this 
> ticket is going to only focus on windowing definition and 1 / 2 / 4 above.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2974; `==` is used incorrectly in a few ...

2015-12-09 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/652


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-2974) `==` is used incorrectly in a few places in Java code

2015-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15050039#comment-15050039
 ] 

ASF GitHub Bot commented on KAFKA-2974:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/652


> `==` is used incorrectly in a few places in Java code
> -
>
> Key: KAFKA-2974
> URL: https://issues.apache.org/jira/browse/KAFKA-2974
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
>Reporter: Ismael Juma
>Assignee: Ismael Juma
>
> Unlike Scala, `==` is reference equality in Java and one normally wants to 
> use `equals`. We should fix the cases where `==` is used incorrectly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2733) Distinguish metric names inside the sensor registry

2015-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15050151#comment-15050151
 ] 

ASF GitHub Bot commented on KAFKA-2733:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/643


> Distinguish metric names inside the sensor registry
> ---
>
> Key: KAFKA-2733
> URL: https://issues.apache.org/jira/browse/KAFKA-2733
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
> Fix For: 0.9.1.0
>
>
> Since stream tasks can share the same StreamingMetrics object, and the 
> MetricName is distinguishable only by the group name (same for the same type 
> of states, and for other streaming metrics) and the tags (currently only the 
> client-ids of the StreamThread), when we have multiple tasks within a single 
> stream thread, it could lead to IllegalStateException upon trying to register 
> the same metric from those tasks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-2733) Distinguish metric names inside the sensor registry

2015-12-09 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-2733.
--
Resolution: Fixed

Issue resolved by pull request 643
[https://github.com/apache/kafka/pull/643]

> Distinguish metric names inside the sensor registry
> ---
>
> Key: KAFKA-2733
> URL: https://issues.apache.org/jira/browse/KAFKA-2733
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
> Fix For: 0.9.1.0
>
>
> Since stream tasks can share the same StreamingMetrics object, and the 
> MetricName is distinguishable only by the group name (same for the same type 
> of states, and for other streaming metrics) and the tags (currently only the 
> client-ids of the StreamThread), when we have multiple tasks within a single 
> stream thread, it could lead to IllegalStateException upon trying to register 
> the same metric from those tasks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-2976) Mirror maker dies if we delete a topic from destination cluster

2015-12-09 Thread Mayuresh Gharat (JIRA)
Mayuresh Gharat created KAFKA-2976:
--

 Summary: Mirror maker dies if we delete a topic from destination 
cluster
 Key: KAFKA-2976
 URL: https://issues.apache.org/jira/browse/KAFKA-2976
 Project: Kafka
  Issue Type: Bug
  Components: clients
Reporter: Mayuresh Gharat
Assignee: Mayuresh Gharat


In a data pipeline:

1) Suppose the Mirror Maker is producing to a cluster with topic T, which has 
128 partitions (Partition 0 to Partition 127). The default setting on creation 
of a new topic on that cluster is 8 partitions.
2) After we delete the topic, the topic gets recreated with 8 partitions 
(Partition 0 to Partition 7).
3) The RecordAccumulator has batches for partitions from 9 to 127. Those 
batches expire and the mirror makers die to avoid data loss.

We need a way to reassign those batches (batches for Partition 9 to Partition 
127) in the RecordAccumulator to the newly created topic T with 8 partitions 
(Partition 0 to Partition 7).




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2975) The NetworkClient should request a metadata update after it gets an error in the handleResponse()

2015-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15049939#comment-15049939
 ] 

ASF GitHub Bot commented on KAFKA-2975:
---

GitHub user MayureshGharat opened a pull request:

https://github.com/apache/kafka/pull/658

KAFKA-2975

The NetworkClient should request a metadata update after it gets an error 
in handleResponse().

Currently in a data pipeline:
1) Let's say the Mirror Maker requestTimeout is set to 2 min and metadataExpiry 
is set to 5 min.
2) We delete a topic; the Mirror Maker gets UNKNOWN_TOPIC_PARTITION and 
tries to refresh its metadata.
3) It gets LeaderNotAvailableException, maybe because the topic is not 
created yet.
4) Now its metadata does not have any information about that topic.
5) It will wait 5 min to do the next refresh.
6) In the meantime the batches sitting in the accumulator will expire and 
the mirror makers die to avoid data loss.

To overcome this we need to refresh the metadata after 3).

There is an alternative solution: set metadataExpiry to less than 
requestTimeout, but this means we make more metadata requests over the wire 
in the normal scenario as well.
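
As a sketch of that alternative configuration (broker address and topic are 
placeholders; the property names are the standard producer configs):

{code}
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public final class EagerMetadataRefreshSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        // The scenario above: requestTimeout = 2 min, metadataExpiry = 5 min.
        // Flipping the relationship refreshes metadata before queued batches
        // can expire, at the cost of more metadata requests on the wire.
        props.put("request.timeout.ms", "120000");
        props.put("metadata.max.age.ms", "60000");

        KafkaProducer<String, String> producer =
                new KafkaProducer<String, String>(props);
        producer.send(new ProducerRecord<String, String>("mirrored-topic", "k", "v"));
        producer.close();
    }
}
{code}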


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/MayureshGharat/kafka kafka-2975

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/658.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #658


commit 8c7534b5ff89960e26db22a27e183d10002aeb01
Author: Mayuresh Gharat 
Date:   2015-12-10T03:04:37Z

The NetworkClient should request a metadata update after it gets an error 
in response




> The NetworkClient should request a metadata update after it gets an error in 
> the handleResponse()
> -
>
> Key: KAFKA-2975
> URL: https://issues.apache.org/jira/browse/KAFKA-2975
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Reporter: Mayuresh Gharat
>Assignee: Mayuresh Gharat
>
> Currently in the data pipeline: 
> 1) Let's say the Mirror Maker requestTimeout is set to 2 min and metadataExpiry 
> is set to 5 min.
> 2) We delete a topic; the Mirror Maker gets UNKNOWN_TOPIC_PARTITION and tries 
> to refresh its Metadata.
> 3) It gets LeaderNotAvailableException, maybe because the topic is not 
> created yet.
> 4) Now its metadata does not have any information about that topic.
> 5) It will wait for 5 min to do the next refresh.
> 6) In the meantime the batches sitting in the accumulator will expire and 
> the mirror makers die to avoid data loss.
> To overcome this we need to refresh the metadata after 3).
> There is an alternative solution: set metadataExpiry to be less than 
> requestTimeout, but that would mean making more metadata requests 
> over the wire in the normal scenario as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2975

2015-12-09 Thread MayureshGharat
GitHub user MayureshGharat opened a pull request:

https://github.com/apache/kafka/pull/658

KAFKA-2975

The NetworkClient should request a metadata update after it gets an error 
in the handleResponse().

Currently in the data pipeline:
1) Let's say the Mirror Maker requestTimeout is set to 2 min and metadataExpiry 
is set to 5 min.
2) We delete a topic; the Mirror Maker gets UNKNOWN_TOPIC_PARTITION and 
tries to refresh its Metadata.
3) It gets LeaderNotAvailableException, maybe because the topic is not 
created yet.
4) Now its metadata does not have any information about that topic.
5) It will wait for 5 min to do the next refresh.
6) In the meantime the batches sitting in the accumulator will expire and 
the mirror makers die to avoid data loss.

To overcome this we need to refresh the metadata after 3).

There is an alternative solution: set metadataExpiry to be less than 
requestTimeout, but that would mean making more metadata requests over the 
wire in the normal scenario as well.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/MayureshGharat/kafka kafka-2975

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/658.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #658


commit 8c7534b5ff89960e26db22a27e183d10002aeb01
Author: Mayuresh Gharat 
Date:   2015-12-10T03:04:37Z

The NetworkClient should request a metadata update after it gets an error 
in response




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-2974) `==` is used incorrectly in a few places in Java code

2015-12-09 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-2974:
-
   Resolution: Fixed
Fix Version/s: 0.9.0.1
   Status: Resolved  (was: Patch Available)

Issue resolved by pull request 652
[https://github.com/apache/kafka/pull/652]

> `==` is used incorrectly in a few places in Java code
> -
>
> Key: KAFKA-2974
> URL: https://issues.apache.org/jira/browse/KAFKA-2974
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
>Reporter: Ismael Juma
>Assignee: Ismael Juma
> Fix For: 0.9.0.1
>
>
> Unlike Scala, `==` is reference equality in Java and one normally wants to 
> use `equals`. We should fix the cases where `==` is used incorrectly.
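A small self-contained illustration of the bug class being fixed (not the 
actual patched code):

{code}
// In Java, `==` on objects compares references, not values.
public class EqualsVsReferenceEquality {
    public static void main(String[] args) {
        String a = new String("topic-1");
        String b = new String("topic-1");
        System.out.println(a == b);      // false: two distinct objects
        System.out.println(a.equals(b)); // true: equal values

        Integer x = 1000;
        Integer y = 1000;
        System.out.println(x == y);      // false outside the small-integer cache
        System.out.println(x.equals(y)); // true
    }
}
{code}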



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk8 #221

2015-12-09 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-2974; `==` is used incorrectly in a few places in Java code

--
[...truncated 2869 lines...]

kafka.server.DynamicConfigChangeTest > testConfigChangeOnNonExistingTopic PASSED

kafka.server.DynamicConfigChangeTest > testConfigChange PASSED

kafka.server.HighwatermarkPersistenceTest > 
testHighWatermarkPersistenceMultiplePartitions PASSED

kafka.server.HighwatermarkPersistenceTest > 
testHighWatermarkPersistenceSinglePartition PASSED

kafka.server.LogRecoveryTest > testHWCheckpointNoFailuresMultipleLogSegments 
PASSED

kafka.server.LogRecoveryTest > testHWCheckpointWithFailuresMultipleLogSegments 
PASSED

kafka.server.LogRecoveryTest > testHWCheckpointNoFailuresSingleLogSegment PASSED

kafka.server.LogRecoveryTest > testHWCheckpointWithFailuresSingleLogSegment 
PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testIsrAfterBrokerShutDownAndJoinsBack PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAutoCreateTopicWithCollision PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokerListWithNoTopics PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testAutoCreateTopic PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testGetAllTopicMetadata 
PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testBasicTopicMetadata PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testTopicMetadataRequest 
PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown PASSED

kafka.integration.PrimitiveApiTest > testMultiProduce PASSED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch PASSED

kafka.integration.PrimitiveApiTest > testFetchRequestCanProperlySerialize PASSED

kafka.integration.PrimitiveApiTest > testPipelinedProduceRequests PASSED

kafka.integration.PrimitiveApiTest > testProduceAndMultiFetch PASSED

kafka.integration.PrimitiveApiTest > 
testDefaultEncoderProducerAndFetchWithCompression PASSED

kafka.integration.PrimitiveApiTest > testConsumerEmptyTopic PASSED

kafka.integration.PrimitiveApiTest > testEmptyFetchRequest PASSED

kafka.integration.PlaintextTopicMetadataTest > 
testIsrAfterBrokerShutDownAndJoinsBack PASSED

kafka.integration.PlaintextTopicMetadataTest > testAutoCreateTopicWithCollision 
PASSED

kafka.integration.PlaintextTopicMetadataTest > testAliveBrokerListWithNoTopics 
PASSED

kafka.integration.PlaintextTopicMetadataTest > testAutoCreateTopic PASSED

kafka.integration.PlaintextTopicMetadataTest > testGetAllTopicMetadata PASSED

kafka.integration.PlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup PASSED

kafka.integration.PlaintextTopicMetadataTest > testBasicTopicMetadata PASSED

kafka.integration.PlaintextTopicMetadataTest > testTopicMetadataRequest PASSED

kafka.integration.PlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown PASSED

kafka.integration.SslTopicMetadataTest > testIsrAfterBrokerShutDownAndJoinsBack 
PASSED

kafka.integration.SslTopicMetadataTest > testAutoCreateTopicWithCollision PASSED

kafka.integration.SslTopicMetadataTest > testAliveBrokerListWithNoTopics PASSED

kafka.integration.SslTopicMetadataTest > testAutoCreateTopic PASSED

kafka.integration.SslTopicMetadataTest > testGetAllTopicMetadata PASSED

kafka.integration.SslTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup PASSED

kafka.integration.SslTopicMetadataTest > testBasicTopicMetadata PASSED

kafka.integration.SslTopicMetadataTest > testTopicMetadataRequest PASSED

kafka.integration.SslTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown PASSED

kafka.integration.RollingBounceTest > testRollingBounce PASSED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooLow PASSED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooLow 
PASSED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooHigh 
PASSED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooHigh 
PASSED

kafka.integration.MinIsrConfigTest > testDefaultKafkaConfig PASSED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionEnabled 
PASSED
ERROR: Could not install JDK1_8_0_45_HOME
java.lang.NullPointerException
at 
hudson.plugins.toolenv.ToolEnvBuildWrapper$1.buildEnvVars(ToolEnvBuildWrapper.java:46)
at hudson.model.AbstractBuild.getEnvironment(AbstractBuild.java:941)
at hudson.plugins.git.GitSCM.getParamExpandedRepos(GitSCM.java:390)
at 
hudson.plugins.git.GitSCM.compareRemoteRevisionWithImpl(GitSCM.java:577)
at hudson.plugins.git.GitSCM.compareRemoteRevisionWith(GitSCM.java:527)
at 

Build failed in Jenkins: kafka-trunk-jdk8 #220

2015-12-09 Thread Apache Jenkins Server
See 

Changes:

[cshapi] MINOR: Trivial doc/ typo fixes.

--
[...truncated 32 lines...]
+ GRADLE_OPTS=-Xmx1024m
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 clean jarAll testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.9/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:clean UP-TO-DATE
:clients:clean
:connect:clean UP-TO-DATE
:core:clean
:examples:clean
:log4j-appender:clean
:streams:clean
:tools:clean
:connect:api:clean
:connect:file:clean
:connect:json:clean
:connect:runtime:clean
:jar_core_2_10
Building project 'core' with Scala version 2.10.6
:kafka-trunk-jdk8:clients:compileJava
warning: [options] bootstrap class path not set in conjunction with -source 1.7
Note: 

 uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
Note: Some input files use unchecked or unsafe operations.
Note: Recompile with -Xlint:unchecked for details.
1 warning

:kafka-trunk-jdk8:clients:processResources UP-TO-DATE
:kafka-trunk-jdk8:clients:classes
:kafka-trunk-jdk8:clients:determineCommitId UP-TO-DATE
:kafka-trunk-jdk8:clients:createVersionFile
:kafka-trunk-jdk8:clients:jar
:kafka-trunk-jdk8:core:compileJava UP-TO-DATE
:kafka-trunk-jdk8:core:compileScala
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512m; 
support was removed in 8.0

:79:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.

org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP
 ^
:36:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
 commitTimestamp: Long = 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP,

  ^
:37:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
 expireTimestamp: Long = 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP) {

  ^
:394:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
  if (value.expireTimestamp == 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP)

^
:273:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
if (offsetAndMetadata.commitTimestamp == 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP)

  ^
:301:
 a pure expression does nothing in statement position; you may be omitting 
necessary parentheses
ControllerStats.uncleanLeaderElectionRate
^
:302:
 a pure expression does nothing in statement position; you may be omitting 
necessary parentheses
ControllerStats.leaderElectionTimer
^
:74:
 value BLOCK_ON_BUFFER_FULL_CONFIG in object ProducerConfig is deprecated: see 
corresponding Javadoc for more information.
producerProps.put(ProducerConfig.BLOCK_ON_BUFFER_FULL_CONFIG, "true")
 ^
:195:
 value BLOCK_ON_BUFFER_FULL_CONFIG in object ProducerConfig is deprecated: see 

[jira] [Updated] (KAFKA-2975) The NetworkClient should request a metadata update after it gets an error in the handleResponse()

2015-12-09 Thread Mayuresh Gharat (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2975?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mayuresh Gharat updated KAFKA-2975:
---
Description: 
Currently in the data pipeline: 
1) Let's say the Mirror Maker requestTimeout is set to 2 min and metadataExpiry is 
set to 5 min.
2) We delete a topic; the Mirror Maker gets UNKNOWN_TOPIC_PARTITION and tries 
to refresh its Metadata.
3) It gets LeaderNotAvailableException, maybe because the topic is not created 
yet.
4) Now its metadata does not have any information about that topic.
5) It will wait for 5 min to do the next refresh.
6) In the meantime the batches sitting in the accumulator will expire and the 
mirror makers die to avoid data loss.

To overcome this we need to refresh the metadata after 3), before the timeout 
kicks in.

There is an alternative solution: set metadataExpiry to be less than 
requestTimeout, but that would mean making more metadata requests over the 
wire in the normal scenario as well.

  was:
Currently in the data pipeline: 
1) Let's say the Mirror Maker requestTimeout is set to 2 min and metadataExpiry is 
set to 5 min.
2) We delete a topic; the Mirror Maker gets UNKNOWN_TOPIC_PARTITION and tries 
to refresh its Metadata.
3) It gets LeaderNotAvailableException, maybe because the topic is not created 
yet.
4) Now its metadata does not have any information about that topic.
5) It will wait for 5 min to do the next refresh.
6) In the meantime the batches sitting in the accumulator will expire and the 
mirror makers die.

To overcome this we need to refresh the metadata after 3), before the timeout 
kicks in.

There is an alternative solution: set metadataExpiry to be less than 
requestTimeout, but that would mean making more metadata requests over the 
wire in the normal scenario as well.


> The NetworkClient should request a metadata update after it gets an error in 
> the handleResponse()
> -
>
> Key: KAFKA-2975
> URL: https://issues.apache.org/jira/browse/KAFKA-2975
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Reporter: Mayuresh Gharat
>Assignee: Mayuresh Gharat
>
> Currently in the data pipeline: 
> 1) Let's say the Mirror Maker requestTimeout is set to 2 min and metadataExpiry 
> is set to 5 min.
> 2) We delete a topic; the Mirror Maker gets UNKNOWN_TOPIC_PARTITION and tries 
> to refresh its Metadata.
> 3) It gets LeaderNotAvailableException, maybe because the topic is not 
> created yet.
> 4) Now its metadata does not have any information about that topic.
> 5) It will wait for 5 min to do the next refresh.
> 6) In the meantime the batches sitting in the accumulator will expire and 
> the mirror makers die to avoid data loss.
> To overcome this we need to refresh the metadata after 3), before the timeout 
> kicks in.
> There is an alternative solution: set metadataExpiry to be less than 
> requestTimeout, but that would mean making more metadata requests 
> over the wire in the normal scenario as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2975) The NetworkClient should request a metadata update after it gets an error in the handleResponse()

2015-12-09 Thread Mayuresh Gharat (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2975?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mayuresh Gharat updated KAFKA-2975:
---
Description: 
Currently in the data pipeline: 
1) Let's say the Mirror Maker requestTimeout is set to 2 min and metadataExpiry is 
set to 5 min.
2) We delete a topic; the Mirror Maker gets UNKNOWN_TOPIC_PARTITION and tries 
to refresh its Metadata.
3) It gets LeaderNotAvailableException, maybe because the topic is not created 
yet.
4) Now its metadata does not have any information about that topic.
5) It will wait for 5 min to do the next refresh.
6) In the meantime the batches sitting in the accumulator will expire and the 
mirror makers die to avoid data loss.

To overcome this we need to refresh the metadata after 3).

There is an alternative solution: set metadataExpiry to be less than 
requestTimeout, but that would mean making more metadata requests over the 
wire in the normal scenario as well.

  was:
Currently in the data pipeline: 
1) Let's say the Mirror Maker requestTimeout is set to 2 min and metadataExpiry is 
set to 5 min.
2) We delete a topic; the Mirror Maker gets UNKNOWN_TOPIC_PARTITION and tries 
to refresh its Metadata.
3) It gets LeaderNotAvailableException, maybe because the topic is not created 
yet.
4) Now its metadata does not have any information about that topic.
5) It will wait for 5 min to do the next refresh.
6) In the meantime the batches sitting in the accumulator will expire and the 
mirror makers die to avoid data loss.

To overcome this we need to refresh the metadata after 3), before the timeout 
kicks in.

There is an alternative solution: set metadataExpiry to be less than 
requestTimeout, but that would mean making more metadata requests over the 
wire in the normal scenario as well.


> The NetworkClient should request a metadata update after it gets an error in 
> the handleResponse()
> -
>
> Key: KAFKA-2975
> URL: https://issues.apache.org/jira/browse/KAFKA-2975
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Reporter: Mayuresh Gharat
>Assignee: Mayuresh Gharat
>
> Currently in the data pipeline: 
> 1) Let's say the Mirror Maker requestTimeout is set to 2 min and metadataExpiry 
> is set to 5 min.
> 2) We delete a topic; the Mirror Maker gets UNKNOWN_TOPIC_PARTITION and tries 
> to refresh its Metadata.
> 3) It gets LeaderNotAvailableException, maybe because the topic is not 
> created yet.
> 4) Now its metadata does not have any information about that topic.
> 5) It will wait for 5 min to do the next refresh.
> 6) In the meantime the batches sitting in the accumulator will expire and 
> the mirror makers die to avoid data loss.
> To overcome this we need to refresh the metadata after 3).
> There is an alternative solution: set metadataExpiry to be less than 
> requestTimeout, but that would mean making more metadata requests 
> over the wire in the normal scenario as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-2975) The NetworkClient should request a metadata update after it gets an error in the handleResponse()

2015-12-09 Thread Mayuresh Gharat (JIRA)
Mayuresh Gharat created KAFKA-2975:
--

 Summary: The NetworkClient should request a metadata update after 
it gets an error in the handleResponse()
 Key: KAFKA-2975
 URL: https://issues.apache.org/jira/browse/KAFKA-2975
 Project: Kafka
  Issue Type: Bug
  Components: clients
Reporter: Mayuresh Gharat
Assignee: Mayuresh Gharat


Currently in the data pipeline: 
1) Let's say the Mirror Maker requestTimeout is set to 2 min and metadataExpiry is 
set to 5 min.
2) We delete a topic; the Mirror Maker gets UNKNOWN_TOPIC_PARTITION and tries 
to refresh its Metadata.
3) It gets LeaderNotAvailableException, maybe because the topic is not created 
yet.
4) Now its metadata does not have any information about that topic.
5) It will wait for 5 min to do the next refresh.
6) In the meantime the batches sitting in the accumulator will expire and the 
mirror makers die.

To overcome this we need to refresh the metadata after 3), before the timeout 
kicks in.

There is an alternative solution: set metadataExpiry to be less than 
requestTimeout, but that would mean making more metadata requests over the 
wire in the normal scenario as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (KAFKA-2977) Transient Failure in kafka.log.LogCleanerIntegrationTest.cleanerTest

2015-12-09 Thread jin xing (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jin xing reassigned KAFKA-2977:
---

Assignee: jin xing

> Transient Failure in kafka.log.LogCleanerIntegrationTest.cleanerTest
> 
>
> Key: KAFKA-2977
> URL: https://issues.apache.org/jira/browse/KAFKA-2977
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Guozhang Wang
>Assignee: jin xing
>
> {code}
> java.lang.AssertionError: log cleaner should have processed up to offset 599
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> kafka.log.LogCleanerIntegrationTest.cleanerTest(LogCleanerIntegrationTest.scala:76)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at org.junit.runners.Suite.runChild(Suite.java:128)
>   at org.junit.runners.Suite.runChild(Suite.java:27)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:105)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:56)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:64)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:50)
>   at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.messaging.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
>   at 
> org.gradle.messaging.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
>   at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
>   at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:106)
>   at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.messaging.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:360)
>   at 
> org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:54)
>   at 
> 

[GitHub] kafka pull request: HOTFIX: fix table-table outer join and left jo...

2015-12-09 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/653


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Jenkins build is back to normal : kafka-trunk-jdk7 #890

2015-12-09 Thread Apache Jenkins Server
See 



[jira] [Updated] (KAFKA-2875) Class path contains multiple SLF4J bindings warnings when using scripts under bin

2015-12-09 Thread jin xing (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jin xing updated KAFKA-2875:

   Labels: patch  (was: )
Fix Version/s: 0.9.0.1
   Status: Patch Available  (was: Open)

> Class path contains multiple SLF4J bindings warnings when using scripts under 
> bin
> -
>
> Key: KAFKA-2875
> URL: https://issues.apache.org/jira/browse/KAFKA-2875
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.9.0.0
>Reporter: Ismael Juma
>Assignee: jin xing
>Priority: Minor
>  Labels: patch
> Fix For: 0.9.0.1
>
>
> This adds a lot of noise when running the scripts, see example when running 
> kafka-console-producer.sh:
> {code}
> ~/D/s/kafka-0.9.0.0-src ❯❯❯ ./bin/kafka-console-producer.sh --topic topic 
> --broker-list localhost:9092 ⏎
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in 
> [jar:file:/Users/ijuma/Downloads/scala-releases/kafka-0.9.0.0-src/core/build/dependant-libs-2.10.5/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/Users/ijuma/Downloads/scala-releases/kafka-0.9.0.0-src/core/build/dependant-libs-2.10.5/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/Users/ijuma/Downloads/scala-releases/kafka-0.9.0.0-src/tools/build/dependant-libs-2.10.5/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/Users/ijuma/Downloads/scala-releases/kafka-0.9.0.0-src/connect/api/build/dependant-libs/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/Users/ijuma/Downloads/scala-releases/kafka-0.9.0.0-src/connect/runtime/build/dependant-libs/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/Users/ijuma/Downloads/scala-releases/kafka-0.9.0.0-src/connect/file/build/dependant-libs/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/Users/ijuma/Downloads/scala-releases/kafka-0.9.0.0-src/connect/json/build/dependant-libs/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation.
> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> {code}
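One quick way to see which binding actually won (a hedged diagnostic sketch, 
not part of the fix, assuming slf4j-api and at least one binding are on the 
class path): ask SLF4J for the concrete logger factory class.

{code}
import org.slf4j.LoggerFactory;

// Prints the concrete ILoggerFactory implementation SLF4J bound to,
// e.g. org.slf4j.impl.Log4jLoggerFactory when a log4j binding wins.
public class WhichSlf4jBinding {
    public static void main(String[] args) {
        System.out.println(LoggerFactory.getILoggerFactory().getClass().getName());
    }
}
{code}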



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2974) `==` is used incorrectly in a few places in Java code

2015-12-09 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-2974:
---
Reviewer: Guozhang Wang

> `==` is used incorrectly in a few places in Java code
> -
>
> Key: KAFKA-2974
> URL: https://issues.apache.org/jira/browse/KAFKA-2974
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
>Reporter: Ismael Juma
>Assignee: Ismael Juma
> Fix For: 0.9.0.1
>
>
> Unlike Scala, `==` is reference equality in Java and one normally wants to 
> use `equals`. We should fix the cases where `==` is used incorrectly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2733: Standardize metric name for Kafka ...

2015-12-09 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/643


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (KAFKA-2977) Transient Failure in kafka.log.LogCleanerIntegrationTest.cleanerTest

2015-12-09 Thread Guozhang Wang (JIRA)
Guozhang Wang created KAFKA-2977:


 Summary: Transient Failure in 
kafka.log.LogCleanerIntegrationTest.cleanerTest
 Key: KAFKA-2977
 URL: https://issues.apache.org/jira/browse/KAFKA-2977
 Project: Kafka
  Issue Type: Sub-task
Reporter: Guozhang Wang


{code}
java.lang.AssertionError: log cleaner should have processed up to offset 599
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at 
kafka.log.LogCleanerIntegrationTest.cleanerTest(LogCleanerIntegrationTest.scala:76)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at org.junit.runners.Suite.runChild(Suite.java:128)
at org.junit.runners.Suite.runChild(Suite.java:27)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:105)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:56)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:64)
at 
org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:50)
at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
at 
org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at 
org.gradle.messaging.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
at 
org.gradle.messaging.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
at 
org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:106)
at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
at 
org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at 
org.gradle.messaging.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:360)
at 
org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:54)
at 
org.gradle.internal.concurrent.StoppableExecutorImpl$1.run(StoppableExecutorImpl.java:40)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at 

[jira] [Commented] (KAFKA-2772) Stabilize replication hard bounce test

2015-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15049261#comment-15049261
 ] 

ASF GitHub Bot commented on KAFKA-2772:
---

Github user granders closed the pull request at:

https://github.com/apache/kafka/pull/481


> Stabilize replication hard bounce test
> --
>
> Key: KAFKA-2772
> URL: https://issues.apache.org/jira/browse/KAFKA-2772
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
>Reporter: Geoff Anderson
>Assignee: Geoff Anderson
>Priority: Minor
>
> There have been several spurious failures of replication tests during runs of 
> kafka system tests (see for example 
> http://testing.confluent.io/kafka/2015-11-07--001/)
> {code:title=report.txt}
> Expected producer to still be producing.
> Traceback (most recent call last):
>   File 
> "/var/lib/jenkins/workspace/kafka_system_tests/kafka/venv/local/lib/python2.7/site-packages/ducktape-0.3.8-py2.7.egg/ducktape/tests/runner.py",
>  line 101, in run_all_tests
> result.data = self.run_single_test()
>   File 
> "/var/lib/jenkins/workspace/kafka_system_tests/kafka/venv/local/lib/python2.7/site-packages/ducktape-0.3.8-py2.7.egg/ducktape/tests/runner.py",
>  line 151, in run_single_test
> return self.current_test_context.function(self.current_test)
>   File 
> "/var/lib/jenkins/workspace/kafka_system_tests/kafka/venv/local/lib/python2.7/site-packages/ducktape-0.3.8-py2.7.egg/ducktape/mark/_mark.py",
>  line 331, in wrapper
> return functools.partial(f, *args, **kwargs)(*w_args, **w_kwargs)
>   File 
> "/var/lib/jenkins/workspace/kafka_system_tests/kafka/tests/kafkatest/tests/replication_test.py",
>  line 132, in test_replication_with_broker_failure
> self.run_produce_consume_validate(core_test_action=lambda: 
> failures[failure_mode](self))
>   File 
> "/var/lib/jenkins/workspace/kafka_system_tests/kafka/tests/kafkatest/tests/produce_consume_validate.py",
>  line 65, in run_produce_consume_validate
> self.stop_producer_and_consumer()
>   File 
> "/var/lib/jenkins/workspace/kafka_system_tests/kafka/tests/kafkatest/tests/produce_consume_validate.py",
>  line 55, in stop_producer_and_consumer
> err_msg="Expected producer to still be producing.")
>   File 
> "/var/lib/jenkins/workspace/kafka_system_tests/kafka/venv/local/lib/python2.7/site-packages/ducktape-0.3.8-py2.7.egg/ducktape/utils/util.py",
>  line 36, in wait_until
> raise TimeoutError(err_msg)
> TimeoutError: Expected producer to still be producing.
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2772: Stabilize failures on replication ...

2015-12-09 Thread granders
Github user granders closed the pull request at:

https://github.com/apache/kafka/pull/481


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Jenkins build is back to normal : kafka-trunk-jdk8 #217

2015-12-09 Thread Apache Jenkins Server
See 



[jira] [Updated] (KAFKA-2972) ControlledShutdownResponse always deserialises `partitionsRemaining` as empty

2015-12-09 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-2972:
---
Fix Version/s: (was: 0.9.0.1)
   0.9.1.0

> ControlledShutdownResponse always deserialises `partitionsRemaining` as empty
> -
>
> Key: KAFKA-2972
> URL: https://issues.apache.org/jira/browse/KAFKA-2972
> Project: Kafka
>  Issue Type: Bug
>  Components: network
>Affects Versions: 0.9.0.0
>Reporter: Ismael Juma
>Assignee: Ismael Juma
> Fix For: 0.9.1.0
>
>
> This was a regression introduced when moving to Java request/response classes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2880) Fetcher.getTopicMetadata NullPointerException when broker cannot be reached

2015-12-09 Thread Devin Smith (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15049289#comment-15049289
 ] 

Devin Smith commented on KAFKA-2880:


Looks like it's already documented, but I ran into this same issue with 
listTopics:

{code}
! Caused by: java.lang.NullPointerException
! at 
org.apache.kafka.common.requests.MetadataResponse.<init>(MetadataResponse.java:130)
! at 
org.apache.kafka.clients.consumer.internals.Fetcher.getTopicMetadata(Fetcher.java:203)
! at 
org.apache.kafka.clients.consumer.internals.Fetcher.getAllTopicMetadata(Fetcher.java:180)
! at 
org.apache.kafka.clients.consumer.KafkaConsumer.listTopics(KafkaConsumer.java:1162)
{code}
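A hedged sketch of the defensive pattern the fix implies (the names here are 
hypothetical, not the patched Fetcher code): treat an absent metadata response 
as an explicit empty case instead of dereferencing null.

{code}
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;

public class NullSafeTopicListing {
    static List<String> listTopics(Map<String, List<Integer>> cluster) {
        if (cluster == null) {
            // Broker unreachable: surface an empty result (or a typed
            // exception) rather than a NullPointerException.
            return Collections.emptyList();
        }
        return new ArrayList<>(cluster.keySet());
    }

    public static void main(String[] args) {
        System.out.println(listTopics(null)); // []
    }
}
{code}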

> Fetcher.getTopicMetadata NullPointerException when broker cannot be reached
> ---
>
> Key: KAFKA-2880
> URL: https://issues.apache.org/jira/browse/KAFKA-2880
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.9.0.0
>Reporter: Ewen Cheslack-Postava
>Assignee: Jason Gustafson
> Fix For: 0.9.0.1
>
>
> The Fetcher class will throw a NullPointerException if a broker cannot be 
> reached:
> {quote}
> Exception in thread "main" java.lang.NullPointerException
> at 
> org.apache.kafka.common.requests.MetadataResponse.<init>(MetadataResponse.java:130)
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher.getTopicMetadata(Fetcher.java:203)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.partitionsFor(KafkaConsumer.java:1143)
> at 
> org.apache.kafka.connect.util.KafkaBasedLog.start(KafkaBasedLog.java:126)
> at 
> org.apache.kafka.connect.storage.KafkaOffsetBackingStore.start(KafkaOffsetBackingStore.java:85)
> at org.apache.kafka.connect.runtime.Worker.start(Worker.java:108)
> at org.apache.kafka.connect.runtime.Connect.start(Connect.java:56)
> at 
> org.apache.kafka.connect.cli.ConnectDistributed.main(ConnectDistributed.java:62)
> {quote}
> This is trivially reproduced by trying to start Kafka Connect in distributed 
> mode (i.e. connect-distributed.sh config/connect-distributed.properties) with 
> no broker running. However, it's not specific to Kafka Connect, it just 
> happens to use the consumer in a way that triggers it reliably.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (KAFKA-2880) Fetcher.getTopicMetadata NullPointerException when broker cannot be reached

2015-12-09 Thread Devin Smith (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15049289#comment-15049289
 ] 

Devin Smith edited comment on KAFKA-2880 at 12/9/15 7:52 PM:
-

Looks like it's already documented, but I ran into this same issue with 
listTopics:

{noformat}
! Caused by: java.lang.NullPointerException
! at 
org.apache.kafka.common.requests.MetadataResponse.<init>(MetadataResponse.java:130)
! at 
org.apache.kafka.clients.consumer.internals.Fetcher.getTopicMetadata(Fetcher.java:203)
! at 
org.apache.kafka.clients.consumer.internals.Fetcher.getAllTopicMetadata(Fetcher.java:180)
! at 
org.apache.kafka.clients.consumer.KafkaConsumer.listTopics(KafkaConsumer.java:1162)
{noformat}


was (Author: drsmith):
Looks like it's already documented, but I ran into this same issue with 
listTopics:

{code}
! Caused by: java.lang.NullPointerException
! at 
org.apache.kafka.common.requests.MetadataResponse.<init>(MetadataResponse.java:130)
! at 
org.apache.kafka.clients.consumer.internals.Fetcher.getTopicMetadata(Fetcher.java:203)
! at 
org.apache.kafka.clients.consumer.internals.Fetcher.getAllTopicMetadata(Fetcher.java:180)
! at 
org.apache.kafka.clients.consumer.KafkaConsumer.listTopics(KafkaConsumer.java:1162)
{code}

> Fetcher.getTopicMetadata NullPointerException when broker cannot be reached
> ---
>
> Key: KAFKA-2880
> URL: https://issues.apache.org/jira/browse/KAFKA-2880
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.9.0.0
>Reporter: Ewen Cheslack-Postava
>Assignee: Jason Gustafson
> Fix For: 0.9.0.1
>
>
> The Fetcher class will throw a NullPointerException if a broker cannot be 
> reached:
> {quote}
> Exception in thread "main" java.lang.NullPointerException
> at 
> org.apache.kafka.common.requests.MetadataResponse.<init>(MetadataResponse.java:130)
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher.getTopicMetadata(Fetcher.java:203)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.partitionsFor(KafkaConsumer.java:1143)
> at 
> org.apache.kafka.connect.util.KafkaBasedLog.start(KafkaBasedLog.java:126)
> at 
> org.apache.kafka.connect.storage.KafkaOffsetBackingStore.start(KafkaOffsetBackingStore.java:85)
> at org.apache.kafka.connect.runtime.Worker.start(Worker.java:108)
> at org.apache.kafka.connect.runtime.Connect.start(Connect.java:56)
> at 
> org.apache.kafka.connect.cli.ConnectDistributed.main(ConnectDistributed.java:62)
> {quote}
> This is trivially reproduced by trying to start Kafka Connect in distributed 
> mode (i.e. connect-distributed.sh config/connect-distributed.properties) with 
> no broker running. However, it's not specific to Kafka Connect, it just 
> happens to use the consumer in a way that triggers it reliably.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2972) ControlledShutdownResponse always deserialises `partitionsRemaining` as empty

2015-12-09 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15049292#comment-15049292
 ] 

Ismael Juma commented on KAFKA-2972:


Changed the fix version, as the Java instance of the response is not used to 
serialise the data in 0.9.0 and the problem is at serialisation time.

> ControlledShutdownResponse always deserialises `partitionsRemaining` as empty
> -
>
> Key: KAFKA-2972
> URL: https://issues.apache.org/jira/browse/KAFKA-2972
> Project: Kafka
>  Issue Type: Bug
>  Components: network
>Affects Versions: 0.9.0.0
>Reporter: Ismael Juma
>Assignee: Ismael Juma
> Fix For: 0.9.1.0
>
>
> This was a regression introduced when moving to Java request/response classes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2972) ControlledShutdownResponse always serialises `partitionsRemaining` as empty

2015-12-09 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-2972:
---
Summary: ControlledShutdownResponse always serialises `partitionsRemaining` 
as empty  (was: ControlledShutdownResponse always deserialises 
`partitionsRemaining` as empty)

> ControlledShutdownResponse always serialises `partitionsRemaining` as empty
> ---
>
> Key: KAFKA-2972
> URL: https://issues.apache.org/jira/browse/KAFKA-2972
> Project: Kafka
>  Issue Type: Bug
>  Components: network
>Affects Versions: 0.9.0.0
>Reporter: Ismael Juma
>Assignee: Ismael Juma
> Fix For: 0.9.1.0
>
>
> This was a regression introduced when moving to Java request/response classes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-2972) ControlledShutdownResponse always serialises `partitionsRemaining` as empty

2015-12-09 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-2972.
--
   Resolution: Fixed
Fix Version/s: (was: 0.9.1.0)
   0.9.0.1

Issue resolved by pull request 649
[https://github.com/apache/kafka/pull/649]

> ControlledShutdownResponse always serialises `partitionsRemaining` as empty
> ---
>
> Key: KAFKA-2972
> URL: https://issues.apache.org/jira/browse/KAFKA-2972
> Project: Kafka
>  Issue Type: Bug
>  Components: network
>Affects Versions: 0.9.0.0
>Reporter: Ismael Juma
>Assignee: Ismael Juma
> Fix For: 0.9.0.1
>
>
> This only affects the Java response class which is not used for serialisation 
> in 0.9.0, but will be in 0.9.1.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2972) ControlledShutdownResponse always serialises `partitionsRemaining` as empty

2015-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15049295#comment-15049295
 ] 

ASF GitHub Bot commented on KAFKA-2972:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/649


> ControlledShutdownResponse always serialises `partitionsRemaining` as empty
> ---
>
> Key: KAFKA-2972
> URL: https://issues.apache.org/jira/browse/KAFKA-2972
> Project: Kafka
>  Issue Type: Bug
>  Components: network
>Affects Versions: 0.9.0.0
>Reporter: Ismael Juma
>Assignee: Ismael Juma
> Fix For: 0.9.0.1
>
>
> This only affects the Java response class which is not used for serialisation 
> in 0.9.0, but will be in 0.9.1.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2972) ControlledShutdownResponse always serialises `partitionsRemaining` as empty

2015-12-09 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-2972:
---
Description: This only affects the Java response class which is not used 
for serialisation in 0.9.0, but will be in 0.9.1.  (was: This was a regression 
introduced when moving to Java request/response classes.)

> ControlledShutdownResponse always serialises `partitionsRemaining` as empty
> ---
>
> Key: KAFKA-2972
> URL: https://issues.apache.org/jira/browse/KAFKA-2972
> Project: Kafka
>  Issue Type: Bug
>  Components: network
>Affects Versions: 0.9.0.0
>Reporter: Ismael Juma
>Assignee: Ismael Juma
> Fix For: 0.9.1.0
>
>
> This only affects the Java response class which is not used for serialisation 
> in 0.9.0, but will be in 0.9.1.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2972; Add missing `partitionsRemaingList...

2015-12-09 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/649


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-2880) Fetcher.getTopicMetadata NullPointerException when broker cannot be reached

2015-12-09 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15049298#comment-15049298
 ] 

Guozhang Wang commented on KAFKA-2880:
--

[~drsmith] This is fixed in current trunk, which will be included in the next 
point release (0.9.0.1).

> Fetcher.getTopicMetadata NullPointerException when broker cannot be reached
> ---
>
> Key: KAFKA-2880
> URL: https://issues.apache.org/jira/browse/KAFKA-2880
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.9.0.0
>Reporter: Ewen Cheslack-Postava
>Assignee: Jason Gustafson
> Fix For: 0.9.0.1
>
>
> The Fetcher class will throw a NullPointerException if a broker cannot be 
> reached:
> {quote}
> Exception in thread "main" java.lang.NullPointerException
> at 
> org.apache.kafka.common.requests.MetadataResponse.<init>(MetadataResponse.java:130)
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher.getTopicMetadata(Fetcher.java:203)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.partitionsFor(KafkaConsumer.java:1143)
> at 
> org.apache.kafka.connect.util.KafkaBasedLog.start(KafkaBasedLog.java:126)
> at 
> org.apache.kafka.connect.storage.KafkaOffsetBackingStore.start(KafkaOffsetBackingStore.java:85)
> at org.apache.kafka.connect.runtime.Worker.start(Worker.java:108)
> at org.apache.kafka.connect.runtime.Connect.start(Connect.java:56)
> at 
> org.apache.kafka.connect.cli.ConnectDistributed.main(ConnectDistributed.java:62)
> {quote}
> This is trivially reproduced by trying to start Kafka Connect in distributed 
> mode (i.e. connect-distributed.sh config/connect-distributed.properties) with 
> no broker running. However, it's not specific to Kafka Connect, it just 
> happens to use the consumer in a way that triggers it reliably.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-2972) ControlledShutdownResponse always deserialises `partitionsRemaining` as empty

2015-12-09 Thread Ismael Juma (JIRA)
Ismael Juma created KAFKA-2972:
--

 Summary: ControlledShutdownResponse always deserialises 
`partitionsRemaining` as empty
 Key: KAFKA-2972
 URL: https://issues.apache.org/jira/browse/KAFKA-2972
 Project: Kafka
  Issue Type: Bug
  Components: network
Affects Versions: 0.9.0.0
Reporter: Ismael Juma
Assignee: Ismael Juma
 Fix For: 0.9.0.1


This was a regression introduced when moving to Java request/response classes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2973) Fix leak of child sensors on remove

2015-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15048996#comment-15048996
 ] 

ASF GitHub Bot commented on KAFKA-2973:
---

GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/650

KAFKA-2973; Fix issue where `childrenSensors` is incorrectly updated



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka 
kafka-2973-fix-leak-child-sensors-on-remove

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/650.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #650


commit ef6a543edd4c14e44b8dd660b936a7efa8aeaee0
Author: Ismael Juma 
Date:   2015-12-09T16:39:49Z

Fix issue where `childrenSensors` was incorrectly updated




> Fix leak of child sensors on remove
> ---
>
> Key: KAFKA-2973
> URL: https://issues.apache.org/jira/browse/KAFKA-2973
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.9.0.0
>Reporter: Ismael Juma
>Assignee: Ismael Juma
> Fix For: 0.9.0.1
>
>
> We added the ability to remove sensors from Kafka Metrics in 0.9.0.0. There 
> is, however, a bug in how we populate the `childrenSensors` map causing us to 
> leak some child sensors (all but the last one added).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2973) Fix leak of child sensors on remove

2015-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15049066#comment-15049066
 ] 

ASF GitHub Bot commented on KAFKA-2973:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/650


> Fix leak of child sensors on remove
> ---
>
> Key: KAFKA-2973
> URL: https://issues.apache.org/jira/browse/KAFKA-2973
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.9.0.0
>Reporter: Ismael Juma
>Assignee: Ismael Juma
> Fix For: 0.9.0.1, 0.9.1.0
>
>
> We added the ability to remove sensors from Kafka Metrics in 0.9.0.0. There 
> is, however, a bug in how we populate the `childrenSensors` map causing us to 
> leak some child sensors (all but the last one added).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2972; Add missing `partitionsRemaingList...

2015-12-09 Thread ijuma
GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/649

KAFKA-2972; Add missing `partitionsRemaingList.add` in 
`ControlledShutdownResponse` constructor



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka 
KAFKA-2972-controlled-shutdown-response-bug

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/649.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #649


commit 82eb116122637e05221a8afbceae12d97cc1463d
Author: Ismael Juma 
Date:   2015-12-09T16:57:56Z

Add missing `partitionsRemaingList.add` in `ControlledShutdownResponse` 
constructor
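A hedged, simplified sketch of the bug class (hypothetical types, not the 
actual `ControlledShutdownResponse` code): the constructor walked the 
wire-format entries but never added them to the in-memory list, so the field 
always came out empty.

{code}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class PartitionsRemainingSketch {
    static List<Integer> parse(List<Integer> wireEntries) {
        List<Integer> partitionsRemaining = new ArrayList<>();
        for (Integer partition : wireEntries) {
            // The bug: a call like this was missing, leaving the list empty.
            partitionsRemaining.add(partition);
        }
        return partitionsRemaining;
    }

    public static void main(String[] args) {
        System.out.println(parse(Arrays.asList(0, 3, 7))); // [0, 3, 7]
    }
}
{code}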




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (KAFKA-2973) Fix leak of child sensors on remove

2015-12-09 Thread Ismael Juma (JIRA)
Ismael Juma created KAFKA-2973:
--

 Summary: Fix leak of child sensors on remove
 Key: KAFKA-2973
 URL: https://issues.apache.org/jira/browse/KAFKA-2973
 Project: Kafka
  Issue Type: Bug
  Components: clients
Affects Versions: 0.9.0.0
Reporter: Ismael Juma
Assignee: Ismael Juma
 Fix For: 0.9.0.1


We added the ability to remove sensors from Kafka Metrics in 0.9.0.0. There is, 
however, a bug in how we populate the `childrenSensors` map causing us to leak 
some child sensors (all but the last one added).
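A hedged, simplified sketch of the leak pattern (hypothetical names, not the 
actual Metrics code): overwriting the children list on each registration keeps 
only the last child, so earlier children are never removed along with their 
parent.

{code}
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ChildrenSensorsSketch {
    public static void main(String[] args) {
        Map<String, List<String>> childrenSensors = new HashMap<>();

        // Buggy pattern: each put() replaces the list, dropping prior children.
        childrenSensors.put("parent", Collections.singletonList("child-1"));
        childrenSensors.put("parent", Collections.singletonList("child-2"));
        System.out.println(childrenSensors.get("parent")); // [child-2] only

        // Fixed pattern: append to the existing list instead.
        childrenSensors.clear();
        childrenSensors.computeIfAbsent("parent", k -> new ArrayList<>()).add("child-1");
        childrenSensors.computeIfAbsent("parent", k -> new ArrayList<>()).add("child-2");
        System.out.println(childrenSensors.get("parent")); // [child-1, child-2]
    }
}
{code}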



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-2973) Fix leak of child sensors on remove

2015-12-09 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira resolved KAFKA-2973.
-
   Resolution: Fixed
Fix Version/s: 0.9.1.0

Issue resolved by pull request 650
[https://github.com/apache/kafka/pull/650]

> Fix leak of child sensors on remove
> ---
>
> Key: KAFKA-2973
> URL: https://issues.apache.org/jira/browse/KAFKA-2973
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.9.0.0
>Reporter: Ismael Juma
>Assignee: Ismael Juma
> Fix For: 0.9.0.1, 0.9.1.0
>
>
> We added the ability to remove sensors from Kafka Metrics in 0.9.0.0. There 
> is, however, a bug in how we populate the `childrenSensors` map causing us to 
> leak some child sensors (all but the last one added).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)