[jira] [Commented] (KAFKA-3137) Delete tombstones in log compacted topics may never get removed.

2016-01-22 Thread James Cheng (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15113521#comment-15113521
 ] 

James Cheng commented on KAFKA-3137:


Ah, I see the code. I'll run a test with a log compacted topic. If log 
compaction retains the modification timestamp of the previous files, then that 
eliminates the window I was worried about.

> Delete tombstones in log compacted topics may never get removed.
> 
>
> Key: KAFKA-3137
> URL: https://issues.apache.org/jira/browse/KAFKA-3137
> Project: Kafka
>  Issue Type: Bug
>Reporter: James Cheng
>
> I spoke about this with [~junrao]. I haven't tried to reproduce this, but Jun 
> said that it looks like this is possible, so I'm filing it.
> Delete tombstones in log compacted topics are deleted after 
> delete.retention.ms (at the topic level) or log.cleaner.delete.retention.ms 
> (at the broker level).
> However, we don't have per-message timestamps (at least until KIP-32 is 
> implemented), so the timestamp of the log segment file is used as a proxy. 
> The problem is that the modification time of the log segment changes whenever 
> a compaction run happens.
> It's possible, then, that if log compaction happens very frequently, 
> delete.retention.ms will never be reached. In that case, the delete 
> tombstones would stay around longer than the user expected. 
> I believe that means that log compaction would have to happen more frequently 
> than delete.retention.ms. The frequency of log compaction is some calculation 
> based on segment size, the criteria for segment roll (time or bytes), the 
> min.cleanable.dirty.ratio, as well as the amount of traffic coming into the 
> log compacted topic. So it's possible, but I'm not sure how likely.
> And I would imagine that this can't be fixed until KIP-32 is available.
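The timing interaction described above can be sketched as a toy simulation (hypothetical timings, not Kafka code): if every compaction run rewrites the segment and thereby resets its modification time, the tombstone's apparent age never reaches delete.retention.ms.

```python
def tombstone_removed(delete_retention_ms, compaction_interval_ms, horizon_ms):
    """Return True if a tombstone's apparent age ever reaches
    delete_retention_ms, given that each compaction run resets the
    segment modification time (the age proxy) to the current time."""
    last_mtime = 0  # segment modification time, used as the tombstone's age proxy
    now = 0
    while now < horizon_ms:
        now += compaction_interval_ms
        if now - last_mtime >= delete_retention_ms:
            return True  # tombstone aged past the threshold before the next run
        last_mtime = now  # compaction rewrote the segment, resetting its mtime
    return False

# Compaction every hour with a one-day retention: the age proxy is reset
# hourly and never reaches 24h, so the tombstone is never removed.
assert not tombstone_removed(24 * 3600 * 1000, 3600 * 1000, 7 * 24 * 3600 * 1000)
# Compaction every two days: the tombstone ages past 24h and is removed.
assert tombstone_removed(24 * 3600 * 1000, 2 * 24 * 3600 * 1000, 7 * 24 * 3600 * 1000)
```

This is only a model of the age bookkeeping; whether compaction actually runs that often depends on the segment roll criteria, min.cleanable.dirty.ratio, and traffic, as the report notes.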



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [DISCUSS] KIP-42: Add Producer and Consumer Interceptors

2016-01-22 Thread Neha Narkhede
James,

That is one of the many monitoring use cases for the interceptor interface.

Thanks,
Neha

On Fri, Jan 22, 2016 at 6:18 PM, James Cheng  wrote:

> Anna,
>
> I'm trying to understand a concrete use case. It sounds like producer
> interceptors could be used to implement part of LinkedIn's Kafka Audit
> tool? https://engineering.linkedin.com/kafka/running-kafka-scale
>
> Part of that is done by a wrapper library around the Kafka producer that
> keeps a count of the number of messages produced, and then sends that count
> to a side-topic. It sounds like the producer interceptors could possibly be
> used to implement that?
>
> -James
>
> > On Jan 22, 2016, at 4:33 PM, Anna Povzner  wrote:
> >
> > Hi,
> >
> > I just created a KIP-42 for adding producer and consumer interceptors for
> > intercepting messages at different points on producer and consumer.
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-42%3A+Add+Producer+and+Consumer+Interceptors
> >
> > Comments and suggestions are welcome!
> >
> > Thanks,
> > Anna
>
>
> 
>
> This email and any attachments may contain confidential and privileged
> material for the sole use of the intended recipient. Any review, copying,
> or distribution of this email (or any attachments) by others is prohibited.
> If you are not the intended recipient, please contact the sender
> immediately and permanently delete this email and any attachments. No
> employee or agent of TiVo Inc. is authorized to conclude any binding
> agreement on behalf of TiVo Inc. by email. Binding agreements with TiVo
> Inc. may only be made by a signed written agreement.
>



-- 
Thanks,
Neha


Re: [DISCUSS] KIP-42: Add Producer and Consumer Interceptors

2016-01-22 Thread James Cheng
Anna,

I'm trying to understand a concrete use case. It sounds like producer 
interceptors could be used to implement part of LinkedIn's Kafka Audit tool? 
https://engineering.linkedin.com/kafka/running-kafka-scale

Part of that is done by a wrapper library around the Kafka producer that keeps 
a count of the number of messages produced, and then sends that count to a 
side-topic. It sounds like the producer interceptors could possibly be used to 
implement that?
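A sketch of that wrapper-library approach (Python stand-ins with hypothetical names, not the Kafka client or KIP-42 API; an interceptor would hook an onSend-style callback instead of wrapping the producer):

```python
class CountingProducer:
    """Wraps a producer, counts sends per topic, and periodically
    publishes the counts to an audit side-topic. Illustrative sketch
    of the LinkedIn-style audit approach described above."""

    def __init__(self, producer, audit_topic="audit-counts", flush_every=1000):
        self._producer = producer
        self._audit_topic = audit_topic
        self._flush_every = flush_every
        self._counts = {}
        self._since_flush = 0

    def send(self, topic, value):
        self._producer.send(topic, value)
        self._counts[topic] = self._counts.get(topic, 0) + 1
        self._since_flush += 1
        if self._since_flush >= self._flush_every:
            self._flush_counts()

    def _flush_counts(self):
        # Publish the accumulated per-topic counts to the side-topic.
        self._producer.send(self._audit_topic, dict(self._counts))
        self._since_flush = 0


class FakeProducer:
    """Stand-in for a real Kafka producer, for illustration only."""
    def __init__(self):
        self.sent = []

    def send(self, topic, value):
        self.sent.append((topic, value))
```

With an interceptor, the counting logic would live in the callback rather than in a wrapper, so applications would not need to change which producer class they instantiate.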

-James

> On Jan 22, 2016, at 4:33 PM, Anna Povzner  wrote:
>
> Hi,
>
> I just created a KIP-42 for adding producer and consumer interceptors for
> intercepting messages at different points on producer and consumer.
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-42%3A+Add+Producer+and+Consumer+Interceptors
>
> Comments and suggestions are welcome!
>
> Thanks,
> Anna






Build failed in Jenkins: kafka-trunk-jdk8 #308

2016-01-22 Thread Apache Jenkins Server
See 

Changes:

[cshapi] KAFKA-3140: Fix PatternSyntaxException and hang caused by it in 
Mirro…

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H11 (docker Ubuntu ubuntu yahoo-not-h2) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision bc9237701b06768c119e954ddb4cd2e61c24e305 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f bc9237701b06768c119e954ddb4cd2e61c24e305
 > git rev-list c197113a9c04e2f6c2d1a72161c0d40d5804490e # timeout=10
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson4248277982161186294.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.4-rc-2/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:downloadWrapper

BUILD SUCCESSFUL

Total time: 12.074 secs
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson4562666494061739072.sh
+ export GRADLE_OPTS=-Xmx1024m
+ GRADLE_OPTS=-Xmx1024m
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 clean jarAll testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.10/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:clean UP-TO-DATE
:clients:clean UP-TO-DATE
:connect:clean UP-TO-DATE
:core:clean UP-TO-DATE
:examples:clean UP-TO-DATE
:log4j-appender:clean UP-TO-DATE
:streams:clean UP-TO-DATE
:tools:clean UP-TO-DATE
:connect:api:clean UP-TO-DATE
:connect:file:clean UP-TO-DATE
:connect:json:clean UP-TO-DATE
:connect:runtime:clean UP-TO-DATE
:streams:examples:clean UP-TO-DATE
:jar_core_2_10
Building project 'core' with Scala version 2.10.6
:kafka-trunk-jdk8:clients:compileJava
:jar_core_2_10 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Failed to capture snapshot of input files for task 'compileJava' during 
up-to-date check.  See stacktrace for details.
> Could not add entry 
> '/home/jenkins/.gradle/caches/modules-2/files-2.1/net.jpountz.lz4/lz4/1.3.0/c708bb2590c0652a642236ef45d9f99ff842a2ce/lz4-1.3.0.jar'
>  to cache fileHashes.bin 
> (

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED

Total time: 14.646 secs
Build step 'Execute shell' marked build as failure
Recording test results
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
ERROR: Publisher 'Publish JUnit test result report' failed: No test report 
files were found. Configuration error?
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2


Build failed in Jenkins: kafka-trunk-jdk7 #982

2016-01-22 Thread Apache Jenkins Server
See 

Changes:

[cshapi] KAFKA-3140: Fix PatternSyntaxException and hang caused by it in 
Mirro…

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H11 (docker Ubuntu ubuntu yahoo-not-h2) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision bc9237701b06768c119e954ddb4cd2e61c24e305 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f bc9237701b06768c119e954ddb4cd2e61c24e305
 > git rev-list c197113a9c04e2f6c2d1a72161c0d40d5804490e # timeout=10
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/hudson4905956535806482202.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.4-rc-2/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:downloadWrapper

BUILD SUCCESSFUL

Total time: 18.925 secs
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/hudson2345566592404878349.sh
+ export GRADLE_OPTS=-Xmx1024m
+ GRADLE_OPTS=-Xmx1024m
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 clean jarAll testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.10/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:clean UP-TO-DATE
:clients:clean UP-TO-DATE
:connect:clean UP-TO-DATE
:core:clean UP-TO-DATE
:examples:clean UP-TO-DATE
:log4j-appender:clean UP-TO-DATE
:streams:clean UP-TO-DATE
:tools:clean UP-TO-DATE
:connect:api:clean UP-TO-DATE
:connect:file:clean UP-TO-DATE
:connect:json:clean UP-TO-DATE
:connect:runtime:clean UP-TO-DATE
:streams:examples:clean UP-TO-DATE
:jar_core_2_10
Building project 'core' with Scala version 2.10.6
:kafka-trunk-jdk7:clients:compileJava
:jar_core_2_10 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Failed to capture snapshot of input files for task 'compileJava' during 
up-to-date check.  See stacktrace for details.
> Could not add entry 
> '/home/jenkins/.gradle/caches/modules-2/files-2.1/net.jpountz.lz4/lz4/1.3.0/c708bb2590c0652a642236ef45d9f99ff842a2ce/lz4-1.3.0.jar'
>  to cache fileHashes.bin 
> (

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED

Total time: 23.071 secs
Build step 'Execute shell' marked build as failure
Recording test results
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
ERROR: Publisher 'Publish JUnit test result report' failed: No test report 
files were found. Configuration error?
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51


[jira] [Commented] (KAFKA-3110) can't set cluster acl for a user to CREATE topics without first creating a topic

2016-01-22 Thread Ashish K Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15113405#comment-15113405
 ] 

Ashish K Singh commented on KAFKA-3110:
---

[~tgraves] I have done this with trunk without any issue. Could you confirm 
whether it is still an issue for you?

> can't set cluster acl for a user to CREATE topics without first creating a 
> topic
> 
>
> Key: KAFKA-3110
> URL: https://issues.apache.org/jira/browse/KAFKA-3110
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.9.0.0
>Reporter: Thomas Graves
>
> I started a new Kafka cluster with security. I tried to give a user cluster 
> CREATE permissions so they could create topics:
> kafka-acls.sh --authorizer-properties zookeeper.connect=host.com:2181 
> --cluster --add --operation CREATE --allow-principal User:myuser
> This failed with the error below and the broker ended up shutting down and 
> wouldn't restart without removing the zookeeper data.
> @40005699398806bd170c org.I0Itec.zkclient.exception.ZkNoNodeException: 
> org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = 
> NoNode for /kafka-acl/Topic
> To work around this you can first create any topic which creates the 
> zookeeper node and then after that you can give the user create permissions.
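The workaround above points at the missing piece: the authorizer reads children of a znode (/kafka-acl/Topic) that nothing has created yet. A dict-backed sketch of the tolerant behavior (names and paths illustrative, not the Kafka authorizer code):

```python
class NoNodeError(Exception):
    """Stand-in for ZkNoNodeException."""


class FakeZk:
    """Minimal dict-backed stand-in for a ZooKeeper client."""

    def __init__(self):
        self._nodes = {}

    def create(self, path, data=b""):
        self._nodes[path] = data

    def get_children(self, path):
        if path not in self._nodes and not any(
                p.startswith(path + "/") for p in self._nodes):
            raise NoNodeError(path)
        prefix = path + "/"
        return {p[len(prefix):].split("/")[0]
                for p in self._nodes if p.startswith(prefix)}


def acls_for_resource_type(zk, resource_type):
    """Treat a missing parent node as 'no ACLs stored yet' instead of
    letting the NoNode exception take the broker down."""
    try:
        return zk.get_children("/kafka-acl/" + resource_type)
    except NoNodeError:
        return set()  # nothing has written an ACL of this type yet
```

The same effect can be had by creating the parent path up front; either way, granting an ACL before any topic exists should not be a fatal error.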





[DISCUSS] KIP-42: Add Producer and Consumer Interceptors

2016-01-22 Thread Anna Povzner
Hi,

I just created a KIP-42 for adding producer and consumer interceptors for
intercepting messages at different points on producer and consumer.

https://cwiki.apache.org/confluence/display/KAFKA/KIP-42%3A+Add+Producer+and+Consumer+Interceptors

Comments and suggestions are welcome!

Thanks,
Anna


[jira] [Commented] (KAFKA-3141) kafka-acls.sh throws ArrayIndexOutOfBoundsException for an invalid authorizer-property

2016-01-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15113399#comment-15113399
 ] 

ASF GitHub Bot commented on KAFKA-3141:
---

GitHub user SinghAsDev opened a pull request:

https://github.com/apache/kafka/pull/806

KAFKA-3141: Skip misformed properties instead of throwing ArrayIndexO…

Skip misformed properties instead of throwing ArrayIndexOutOfBoundsException

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/SinghAsDev/kafka KAFKA-3141

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/806.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #806


commit c7ff10bdde3482db2d594faaa6760ea5c0137c0b
Author: Ashish Singh 
Date:   2016-01-23T00:24:15Z

KAFKA-3141: Skip misformed properties instead of throwing 
ArrayIndexOutOfBoundsException




> kafka-acls.sh throws ArrayIndexOutOfBoundsException for an invalid 
> authorizer-property
> --
>
> Key: KAFKA-3141
> URL: https://issues.apache.org/jira/browse/KAFKA-3141
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
>
> kafka-acls.sh throws ArrayIndexOutOfBoundsException for an invalid 
> authorizer-property. Stack trace below.
> {code}
> Error while executing topic Acl command 1
> java.lang.ArrayIndexOutOfBoundsException: 1
>   at 
> kafka.admin.AclCommand$$anonfun$withAuthorizer$2.apply(AclCommand.scala:65)
>   at 
> kafka.admin.AclCommand$$anonfun$withAuthorizer$2.apply(AclCommand.scala:65)
>   at 
> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
>   at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
>   at kafka.admin.AclCommand$.withAuthorizer(AclCommand.scala:65)
>   at kafka.admin.AclCommand$.addAcl(AclCommand.scala:78)
>   at kafka.admin.AclCommand$.main(AclCommand.scala:48)
>   at kafka.admin.AclCommand.main(AclCommand.scala)
> {code}





[jira] [Resolved] (KAFKA-3140) PatternSyntaxException thrown in MM, causes MM to hang

2016-01-22 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira resolved KAFKA-3140.
-
   Resolution: Fixed
Fix Version/s: 0.9.1.0

Issue resolved by pull request 805
[https://github.com/apache/kafka/pull/805]

> PatternSyntaxException thrown in MM, causes MM to hang
> --
>
> Key: KAFKA-3140
> URL: https://issues.apache.org/jira/browse/KAFKA-3140
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
> Fix For: 0.9.1.0
>
>
> On passing an invalid Java regex string as a whitelist to MM, 
> PatternSyntaxException is thrown and MM hangs. Below is the relevant stack trace.
> {code}
> java.util.regex.PatternSyntaxException: Dangling meta character '*' near 
> index 0
> *
> ^
>   at java.util.regex.Pattern.error(Pattern.java:1955)
>   at java.util.regex.Pattern.sequence(Pattern.java:2123)
>   at java.util.regex.Pattern.expr(Pattern.java:1996)
>   at java.util.regex.Pattern.compile(Pattern.java:1696)
>   at java.util.regex.Pattern.(Pattern.java:1351)
>   at java.util.regex.Pattern.compile(Pattern.java:1028)
>   at 
> kafka.tools.MirrorMaker$MirrorMakerNewConsumer.init(MirrorMaker.scala:521)
>   at kafka.tools.MirrorMaker$MirrorMakerThread.run(MirrorMaker.scala:389)
> {code}
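The failure is easy to reproduce in any regex engine; compiling the whitelist up front and failing fast with a readable error keeps a bad pattern from surfacing inside a worker thread. A Python sketch of that validation (illustrative, not the MirrorMaker code):

```python
import re


def compile_whitelist(pattern):
    """Compile a whitelist regex, raising a clear error up front
    instead of letting the exception escape inside a worker thread."""
    try:
        return re.compile(pattern)
    except re.error as e:
        raise ValueError(
            "invalid whitelist regex %r: %s" % (pattern, e)) from None


# '*' alone is a dangling quantifier, just as in the Java stack trace above.
try:
    compile_whitelist("*")
except ValueError:
    pass  # expected: fail fast with a readable message
assert compile_whitelist(".*").match("any-topic")
```

Validating at startup also gives the operator an actionable message (quote or escape the `*`, e.g. `.*`) instead of a hung mirror.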





[jira] [Commented] (KAFKA-3140) PatternSyntaxException thrown in MM, causes MM to hang

2016-01-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15113398#comment-15113398
 ] 

ASF GitHub Bot commented on KAFKA-3140:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/805


> PatternSyntaxException thrown in MM, causes MM to hang
> --
>
> Key: KAFKA-3140
> URL: https://issues.apache.org/jira/browse/KAFKA-3140
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
> Fix For: 0.9.1.0
>
>
> On passing an invalid Java regex string as a whitelist to MM, 
> PatternSyntaxException is thrown and MM hangs. Below is the relevant stack trace.
> {code}
> java.util.regex.PatternSyntaxException: Dangling meta character '*' near 
> index 0
> *
> ^
>   at java.util.regex.Pattern.error(Pattern.java:1955)
>   at java.util.regex.Pattern.sequence(Pattern.java:2123)
>   at java.util.regex.Pattern.expr(Pattern.java:1996)
>   at java.util.regex.Pattern.compile(Pattern.java:1696)
>   at java.util.regex.Pattern.(Pattern.java:1351)
>   at java.util.regex.Pattern.compile(Pattern.java:1028)
>   at 
> kafka.tools.MirrorMaker$MirrorMakerNewConsumer.init(MirrorMaker.scala:521)
>   at kafka.tools.MirrorMaker$MirrorMakerThread.run(MirrorMaker.scala:389)
> {code}





[GitHub] kafka pull request: KAFKA-3141: Skip misformed properties instead ...

2016-01-22 Thread SinghAsDev
GitHub user SinghAsDev opened a pull request:

https://github.com/apache/kafka/pull/806

KAFKA-3141: Skip misformed properties instead of throwing ArrayIndexO…

Skip misformed properties instead of throwing ArrayIndexOutOfBoundsException

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/SinghAsDev/kafka KAFKA-3141

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/806.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #806


commit c7ff10bdde3482db2d594faaa6760ea5c0137c0b
Author: Ashish Singh 
Date:   2016-01-23T00:24:15Z

KAFKA-3141: Skip misformed properties instead of throwing 
ArrayIndexOutOfBoundsException




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request: KAFKA-3140: Fix PatternSyntaxException and han...

2016-01-22 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/805




[jira] [Created] (KAFKA-3141) kafka-acls.sh throws ArrayIndexOutOfBoundsException for an invalid authorizer-property

2016-01-22 Thread Ashish K Singh (JIRA)
Ashish K Singh created KAFKA-3141:
-

 Summary: kafka-acls.sh throws ArrayIndexOutOfBoundsException for 
an invalid authorizer-property
 Key: KAFKA-3141
 URL: https://issues.apache.org/jira/browse/KAFKA-3141
 Project: Kafka
  Issue Type: Bug
Reporter: Ashish K Singh
Assignee: Ashish K Singh


kafka-acls.sh throws ArrayIndexOutOfBoundsException for an invalid 
authorizer-property. Stack trace below.

{code}
Error while executing topic Acl command 1
java.lang.ArrayIndexOutOfBoundsException: 1
at 
kafka.admin.AclCommand$$anonfun$withAuthorizer$2.apply(AclCommand.scala:65)
at 
kafka.admin.AclCommand$$anonfun$withAuthorizer$2.apply(AclCommand.scala:65)
at 
scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at kafka.admin.AclCommand$.withAuthorizer(AclCommand.scala:65)
at kafka.admin.AclCommand$.addAcl(AclCommand.scala:78)
at kafka.admin.AclCommand$.main(AclCommand.scala:48)
at kafka.admin.AclCommand.main(AclCommand.scala)
{code}
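The crash comes from splitting each --authorizer-properties entry at '=' and indexing the second element unconditionally, so a bare token like `1` blows up. A sketch in the spirit of the fix, skipping malformed entries (Python with hypothetical names, not the AclCommand code):

```python
def parse_authorizer_properties(entries):
    """Parse key=value entries, skipping malformed ones instead of
    failing with an index error on e.g. '1' (no '=' present)."""
    props = {}
    for entry in entries:
        key, sep, value = entry.partition("=")
        if not sep or not key:
            continue  # malformed entry: no '=' or empty key, so skip it
        props[key] = value
    return props


# 'zookeeper.connect=host:2181' parses; a bare '1' (as in the trace) is skipped.
assert parse_authorizer_properties(["zookeeper.connect=host:2181", "1"]) == {
    "zookeeper.connect": "host:2181"}
```

Whether to skip silently or warn on each malformed entry is a design choice; logging a warning is friendlier to operators debugging a typo.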





Build failed in Jenkins: kafka-trunk-jdk8 #307

2016-01-22 Thread Apache Jenkins Server
See 

Changes:

[me] KAFKA-3066: Demo Examples for Kafka Streams

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H11 (docker Ubuntu ubuntu yahoo-not-h2) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision c197113a9c04e2f6c2d1a72161c0d40d5804490e 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f c197113a9c04e2f6c2d1a72161c0d40d5804490e
 > git rev-list a19729fe61b23178c6f91135cb81901f76f982f0 # timeout=10
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson2540907725560568292.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.4-rc-2/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:downloadWrapper

BUILD SUCCESSFUL

Total time: 16.607 secs
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson2835799656596372570.sh
+ export GRADLE_OPTS=-Xmx1024m
+ GRADLE_OPTS=-Xmx1024m
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 clean jarAll testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.10/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:clean UP-TO-DATE
:clients:clean UP-TO-DATE
:connect:clean UP-TO-DATE
:core:clean UP-TO-DATE
:examples:clean UP-TO-DATE
:log4j-appender:clean UP-TO-DATE
:streams:clean UP-TO-DATE
:tools:clean UP-TO-DATE
:connect:api:clean UP-TO-DATE
:connect:file:clean UP-TO-DATE
:connect:json:clean UP-TO-DATE
:connect:runtime:clean UP-TO-DATE
:streams:examples:clean UP-TO-DATE
:jar_core_2_10
Building project 'core' with Scala version 2.10.6
:kafka-trunk-jdk8:clients:compileJava
:jar_core_2_10 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Failed to capture snapshot of input files for task 'compileJava' during 
up-to-date check.  See stacktrace for details.
> Could not add entry 
> '/home/jenkins/.gradle/caches/modules-2/files-2.1/net.jpountz.lz4/lz4/1.3.0/c708bb2590c0652a642236ef45d9f99ff842a2ce/lz4-1.3.0.jar'
>  to cache fileHashes.bin 
> (

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED

Total time: 17.017 secs
Build step 'Execute shell' marked build as failure
Recording test results
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
ERROR: Publisher 'Publish JUnit test result report' failed: No test report 
files were found. Configuration error?
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2


Build failed in Jenkins: kafka-trunk-jdk7 #981

2016-01-22 Thread Apache Jenkins Server
See 

Changes:

[me] KAFKA-3066: Demo Examples for Kafka Streams

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu3 (Ubuntu ubuntu legacy-ubuntu) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision c197113a9c04e2f6c2d1a72161c0d40d5804490e 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f c197113a9c04e2f6c2d1a72161c0d40d5804490e
 > git rev-list a19729fe61b23178c6f91135cb81901f76f982f0 # timeout=10
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/hudson5845059196663023199.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.4-rc-2/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:downloadWrapper

BUILD SUCCESSFUL

Total time: 21.371 secs
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/hudson4881152220199172567.sh
+ export GRADLE_OPTS=-Xmx1024m
+ GRADLE_OPTS=-Xmx1024m
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 clean jarAll testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.10/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:clean UP-TO-DATE
:clients:clean UP-TO-DATE
:connect:clean UP-TO-DATE
:core:clean UP-TO-DATE
:examples:clean UP-TO-DATE
:log4j-appender:clean UP-TO-DATE
:streams:clean UP-TO-DATE
:tools:clean UP-TO-DATE
:connect:api:clean UP-TO-DATE
:connect:file:clean UP-TO-DATE
:connect:json:clean UP-TO-DATE
:connect:runtime:clean UP-TO-DATE
:streams:examples:clean UP-TO-DATE
:jar_core_2_10
Building project 'core' with Scala version 2.10.6
:kafka-trunk-jdk7:clients:compileJava
:jar_core_2_10 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Failed to capture snapshot of input files for task 'compileJava' during 
up-to-date check.  See stacktrace for details.
> Could not add entry 
> '/home/jenkins/.gradle/caches/modules-2/files-2.1/net.jpountz.lz4/lz4/1.3.0/c708bb2590c0652a642236ef45d9f99ff842a2ce/lz4-1.3.0.jar'
>  to cache fileHashes.bin 
> (/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk7/.gradle/2.10/taskArtifacts/fileHashes.bin).

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED

Total time: 24.23 secs
Build step 'Execute shell' marked build as failure
Recording test results
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
ERROR: Publisher 'Publish JUnit test result report' failed: No test report 
files were found. Configuration error?
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51


[jira] [Commented] (KAFKA-2426) A Kafka node tries to connect to itself through its advertised hostname

2016-01-22 Thread Alex Loddengaard (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2426?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15113338#comment-15113338
 ] 

Alex Loddengaard commented on KAFKA-2426:
-

Hi [~mcluseau], I've been looking into this and I'm having trouble 
understanding what's going on.

For one, my understanding of Docker networking is that when 
`--userland-proxy=false` is set, iptables will DNAT with hairpinning. If my 
understanding is correct, given you're using iptables, I'm surprised the 
connection times out. It's possible Kubernetes is changing this behavior 
because Kubernetes has the concept of a "pod IP" and creates a separate 
container for the pod network namespace. But the documentation doesn't dive any 
deeper, so I'm not sure.

It would be helpful if you could share your topology and explain how I can 
reproduce the problem. Also, does this problem happen with raw Docker, or just 
with Kubernetes?

Lastly, FWIW, in most cases, a Kafka broker won't connect to itself. However, 
the controller will connect to itself using the advertised hostname.

> A Kafka node tries to connect to itself through its advertised hostname
> ---
>
> Key: KAFKA-2426
> URL: https://issues.apache.org/jira/browse/KAFKA-2426
> Project: Kafka
>  Issue Type: Bug
>  Components: network
>Affects Versions: 0.8.2.1
> Environment: Docker https://github.com/wurstmeister/kafka-docker, 
> managed by a Kubernetes cluster, with an "iptables proxy".
>Reporter: Mikaël Cluseau
>Assignee: Jun Rao
>
> Hi,
> when used behind a firewall, Apache Kafka nodes are trying to connect to 
> themselves using their advertised hostnames. This means that if you have a 
> service IP managed by the docker's host using *only* iptables DNAT rules, the 
> node's connection to "itself" times out.
> This is the case in any setup where a host will DNAT the service IP to the 
> instance's IP, and send the packet back on the same interface over a Linux 
> Bridge port not configured in "hairpin" mode. It's because of this: 
> https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/net/bridge/br_forward.c#n30
> The specific part of the kubernetes issue is here: 
> https://github.com/BenTheElder/kubernetes/issues/3#issuecomment-123925060 .
> The timeout means that even if the partition's leader is elected, it then 
> fails to accept writes from the other members, causing a write lock and 
> generating very heavy logs (as fast as Kafka usually is, but through log4j 
> this time ;)).
> This also means that the normal Docker case works by going through the 
> userspace proxy, which necessarily impacts performance.
> The workaround for us was to add a "127.0.0.2 advertised-hostname" to 
> /etc/hosts in the container startup script.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3066) Add Demo Examples for Kafka Streams

2016-01-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15113322#comment-15113322
 ] 

ASF GitHub Bot commented on KAFKA-3066:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/797


> Add Demo Examples for Kafka Streams
> ---
>
> Key: KAFKA-3066
> URL: https://issues.apache.org/jira/browse/KAFKA-3066
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
> Fix For: 0.9.1.0
>
>
> We need a couple of demo examples for Kafka Streams to illustrate the 
> programmability and functionality of the framework.
> Also extract examples as a separate package.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-3066) Add Demo Examples for Kafka Streams

2016-01-22 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-3066.
--
   Resolution: Fixed
Fix Version/s: 0.9.1.0

Issue resolved by pull request 797
[https://github.com/apache/kafka/pull/797]

> Add Demo Examples for Kafka Streams
> ---
>
> Key: KAFKA-3066
> URL: https://issues.apache.org/jira/browse/KAFKA-3066
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
> Fix For: 0.9.1.0
>
>
> We need a couple of demo examples for Kafka Streams to illustrate the 
> programmability and functionality of the framework.
> Also extract examples as a separate package.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-3066: Demo Examples for Kafka Streams

2016-01-22 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/797


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Build failed in Jenkins: kafka-trunk-jdk8 #306

2016-01-22 Thread Apache Jenkins Server
See 

Changes:

[me] MINOR: Fixes version lookup exception.

--
[...truncated 81 lines...]
/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk8/core/src/main/scala/kafka/server/KafkaServer.scala:305:
 a pure expression does nothing in statement position; you may be omitting 
necessary parentheses
ControllerStats.leaderElectionTimer
^
/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk8/core/src/main/scala/kafka/controller/ControllerChannelManager.scala:389:
 class BrokerEndPoint in object UpdateMetadataRequest is deprecated: see 
corresponding Javadoc for more information.
  new UpdateMetadataRequest.BrokerEndPoint(brokerEndPoint.id, 
brokerEndPoint.host, brokerEndPoint.port)
^
/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk8/core/src/main/scala/kafka/controller/ControllerChannelManager.scala:391:
 constructor UpdateMetadataRequest in class UpdateMetadataRequest is 
deprecated: see corresponding Javadoc for more information.
new UpdateMetadataRequest(controllerId, controllerEpoch, 
liveBrokers.asJava, partitionStates.asJava)
^
/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk8/core/src/main/scala/kafka/network/BlockingChannel.scala:129:
 method readFromReadableChannel in class NetworkReceive is deprecated: see 
corresponding Javadoc for more information.
  response.readFromReadableChannel(channel)
   ^
there were 15 feature warning(s); re-run with -feature for details
11 warnings found
warning: [options] bootstrap class path not set in conjunction with -source 1.7
1 warning
:kafka-trunk-jdk8:core:processResources UP-TO-DATE
:kafka-trunk-jdk8:core:classes
:kafka-trunk-jdk8:clients:compileTestJavawarning: [options] bootstrap class 
path not set in conjunction with -source 1.7
Note: 
/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk8/clients/src/test/java/org/apache/kafka/common/requests/RequestResponseTest.java
 uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
1 warning

:kafka-trunk-jdk8:clients:processTestResources
:kafka-trunk-jdk8:clients:testClasses
:kafka-trunk-jdk8:core:copyDependantLibs
:kafka-trunk-jdk8:core:copyDependantTestLibs
:kafka-trunk-jdk8:core:jar
:jar_core_2_11
Building project 'core' with Scala version 2.11.7
:kafka-trunk-jdk8:clients:compileJava UP-TO-DATE
:kafka-trunk-jdk8:clients:processResources UP-TO-DATE
:kafka-trunk-jdk8:clients:classes UP-TO-DATE
:kafka-trunk-jdk8:clients:determineCommitId UP-TO-DATE
:kafka-trunk-jdk8:clients:createVersionFile
:kafka-trunk-jdk8:clients:jar UP-TO-DATE
:kafka-trunk-jdk8:core:compileJava UP-TO-DATE
:kafka-trunk-jdk8:core:compileScalaJava HotSpot(TM) 64-Bit Server VM warning: 
ignoring option MaxPermSize=512m; support was removed in 8.0

/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk8/core/src/main/scala/kafka/api/OffsetCommitRequest.scala:79:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.

org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP
 ^
/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk8/core/src/main/scala/kafka/common/OffsetMetadataAndError.scala:36:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
 commitTimestamp: Long = 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP,

  ^
/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk8/core/src/main/scala/kafka/common/OffsetMetadataAndError.scala:37:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
 expireTimestamp: Long = 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP) {

  ^
/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk8/core/src/main/scala/kafka/coordinator/GroupMetadataManager.scala:395:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
  if (value.expireTimestamp == 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP)

^
/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk8/core/src/main/scala/kafka/server/KafkaApis.scala:293:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
if (offsetAndMetadata.commitTimestamp == 
org

[jira] [Commented] (KAFKA-1696) Kafka should be able to generate Hadoop delegation tokens

2016-01-22 Thread Ashish K Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15113297#comment-15113297
 ] 

Ashish K Singh commented on KAFKA-1696:
---

Thanks [~gwenshap], [~parth.brahmbhatt]! Looking forward to the KIP proposal.

> Kafka should be able to generate Hadoop delegation tokens
> -
>
> Key: KAFKA-1696
> URL: https://issues.apache.org/jira/browse/KAFKA-1696
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Jay Kreps
>Assignee: Parth Brahmbhatt
>
> For access from MapReduce/etc jobs run on behalf of a user.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (KAFKA-1696) Kafka should be able to generate Hadoop delegation tokens

2016-01-22 Thread Parth Brahmbhatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Parth Brahmbhatt reassigned KAFKA-1696:
---

Assignee: Parth Brahmbhatt  (was: Sriharsha Chintalapani)

> Kafka should be able to generate Hadoop delegation tokens
> -
>
> Key: KAFKA-1696
> URL: https://issues.apache.org/jira/browse/KAFKA-1696
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Jay Kreps
>Assignee: Parth Brahmbhatt
>
> For access from MapReduce/etc jobs run on behalf of a user.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1696) Kafka should be able to generate Hadoop delegation tokens

2016-01-22 Thread Parth Brahmbhatt (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15113295#comment-15113295
 ] 

Parth Brahmbhatt commented on KAFKA-1696:
-

I will assign it to myself and file a KIP.

> Kafka should be able to generate Hadoop delegation tokens
> -
>
> Key: KAFKA-1696
> URL: https://issues.apache.org/jira/browse/KAFKA-1696
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Jay Kreps
>Assignee: Sriharsha Chintalapani
>
> For access from MapReduce/etc jobs run on behalf of a user.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1696) Kafka should be able to generate Hadoop delegation tokens

2016-01-22 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15113294#comment-15113294
 ] 

Gwen Shapira commented on KAFKA-1696:
-

I am not working on it, so you guys figure out ownership :)

> Kafka should be able to generate Hadoop delegation tokens
> -
>
> Key: KAFKA-1696
> URL: https://issues.apache.org/jira/browse/KAFKA-1696
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Jay Kreps
>Assignee: Sriharsha Chintalapani
>
> For access from MapReduce/etc jobs run on behalf of a user.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3061) Get rid of Guava dependency

2016-01-22 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15113293#comment-15113293
 ] 

Ismael Juma commented on KAFKA-3061:


Your suggestion to wait for feedback to see if it's worth investing time on 
this JIRA seems fine to me given the workarounds available.

> Get rid of Guava dependency
> ---
>
> Key: KAFKA-3061
> URL: https://issues.apache.org/jira/browse/KAFKA-3061
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>
> KAFKA-2422 adds Reflections library to KafkaConnect, which depends on Guava.
> Since lots of people want to use Guavas, having it in the framework will lead 
> to conflicts.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Jenkins build is back to normal : kafka-trunk-jdk8 #305

2016-01-22 Thread Apache Jenkins Server
See 



[jira] [Commented] (KAFKA-1696) Kafka should be able to generate Hadoop delegation tokens

2016-01-22 Thread Ashish K Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15113283#comment-15113283
 ] 

Ashish K Singh commented on KAFKA-1696:
---

[~harsha_ch] [~parth.brahmbhatt] [~gwenshap] seems like we have similar needs 
and would like to know if someone is working on this. If required I can pitch 
in.

> Kafka should be able to generate Hadoop delegation tokens
> -
>
> Key: KAFKA-1696
> URL: https://issues.apache.org/jira/browse/KAFKA-1696
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Jay Kreps
>Assignee: Sriharsha Chintalapani
>
> For access from MapReduce/etc jobs run on behalf of a user.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk7 #980

2016-01-22 Thread Apache Jenkins Server
See 

Changes:

[me] MINOR: Fixes version lookup exception.

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H10 (Ubuntu ubuntu) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision a19729fe61b23178c6f91135cb81901f76f982f0 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f a19729fe61b23178c6f91135cb81901f76f982f0
 > git rev-list 21c6cfe50dbe818a392c28f48ce8891f7f99aaf6 # timeout=10
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/hudson8504115115640108658.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.4-rc-2/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:downloadWrapper

BUILD SUCCESSFUL

Total time: 12.525 secs
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/hudson177802908396118811.sh
+ export GRADLE_OPTS=-Xmx1024m
+ GRADLE_OPTS=-Xmx1024m
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 clean jarAll testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.10/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:clean UP-TO-DATE
:clients:clean
:connect:clean UP-TO-DATE
:core:clean
:examples:clean
:log4j-appender:clean
:streams:clean
:tools:clean
:connect:api:clean
:connect:file:clean
:connect:json:clean
:connect:runtime:clean
:jar_core_2_10
Building project 'core' with Scala version 2.10.6
:kafka-trunk-jdk7:clients:compileJavaNote: 

 uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
Note: Some input files use unchecked or unsafe operations.
Note: Recompile with -Xlint:unchecked for details.

:kafka-trunk-jdk7:clients:processResources UP-TO-DATE
:kafka-trunk-jdk7:clients:classes
:kafka-trunk-jdk7:clients:determineCommitId UP-TO-DATE
:kafka-trunk-jdk7:clients:createVersionFile
:kafka-trunk-jdk7:clients:jar
:kafka-trunk-jdk7:core:compileJava UP-TO-DATE
:kafka-trunk-jdk7:core:compileScala
:79:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.

org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP
 ^
:36:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
 commitTimestamp: Long = 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP,

  ^
:37:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
 expireTimestamp: Long = 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP) {

  ^
:395:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more info

[GitHub] kafka pull request: KAFKA-3140: Fix PatternSyntaxException and han...

2016-01-22 Thread SinghAsDev
GitHub user SinghAsDev opened a pull request:

https://github.com/apache/kafka/pull/805

KAFKA-3140: Fix PatternSyntaxException and hang caused by it in Mirro…

Fix PatternSyntaxException and the hang it causes in MirrorMaker when an 
invalid Java regex string is passed as the whitelist

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/SinghAsDev/kafka KAFKA-3140

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/805.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #805


commit 0f120f976ce66e37cb5180df2b024def1cfcd6bb
Author: Ashish Singh 
Date:   2016-01-22T22:22:05Z

KAFKA-3140: Fix PatternSyntaxException and hang caused by it in MirrorMaker 
on passing an invalid Java regex string as the whitelist.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3140) PatternSyntaxException thrown in MM, causes MM to hang

2016-01-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15113231#comment-15113231
 ] 

ASF GitHub Bot commented on KAFKA-3140:
---

GitHub user SinghAsDev opened a pull request:

https://github.com/apache/kafka/pull/805

KAFKA-3140: Fix PatternSyntaxException and hang caused by it in Mirro…

Fix PatternSyntaxException and the hang it causes in MirrorMaker when an 
invalid Java regex string is passed as the whitelist

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/SinghAsDev/kafka KAFKA-3140

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/805.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #805


commit 0f120f976ce66e37cb5180df2b024def1cfcd6bb
Author: Ashish Singh 
Date:   2016-01-22T22:22:05Z

KAFKA-3140: Fix PatternSyntaxException and hang caused by it in MirrorMaker 
on passing an invalid Java regex string as the whitelist.




> PatternSyntaxException thrown in MM, causes MM to hang
> --
>
> Key: KAFKA-3140
> URL: https://issues.apache.org/jira/browse/KAFKA-3140
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
>
> On passing an invalid Java regex string as the whitelist to MM, 
> PatternSyntaxException is thrown and MM hangs. Below is the relevant stack trace.
> {code}
> java.util.regex.PatternSyntaxException: Dangling meta character '*' near 
> index 0
> *
> ^
>   at java.util.regex.Pattern.error(Pattern.java:1955)
>   at java.util.regex.Pattern.sequence(Pattern.java:2123)
>   at java.util.regex.Pattern.expr(Pattern.java:1996)
>   at java.util.regex.Pattern.compile(Pattern.java:1696)
>   at java.util.regex.Pattern.<init>(Pattern.java:1351)
>   at java.util.regex.Pattern.compile(Pattern.java:1028)
>   at 
> kafka.tools.MirrorMaker$MirrorMakerNewConsumer.init(MirrorMaker.scala:521)
>   at kafka.tools.MirrorMaker$MirrorMakerThread.run(MirrorMaker.scala:389)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-3140) PatternSyntaxException thrown in MM, causes MM to hang

2016-01-22 Thread Ashish K Singh (JIRA)
Ashish K Singh created KAFKA-3140:
-

 Summary: PatternSyntaxException thrown in MM, causes MM to hang
 Key: KAFKA-3140
 URL: https://issues.apache.org/jira/browse/KAFKA-3140
 Project: Kafka
  Issue Type: Bug
  Components: tools
Reporter: Ashish K Singh
Assignee: Ashish K Singh


On passing an invalid Java regex string as the whitelist to MM, 
PatternSyntaxException is thrown and MM hangs. Below is the relevant stack trace.

{code}
java.util.regex.PatternSyntaxException: Dangling meta character '*' near index 0
*
^
at java.util.regex.Pattern.error(Pattern.java:1955)
at java.util.regex.Pattern.sequence(Pattern.java:2123)
at java.util.regex.Pattern.expr(Pattern.java:1996)
at java.util.regex.Pattern.compile(Pattern.java:1696)
at java.util.regex.Pattern.<init>(Pattern.java:1351)
at java.util.regex.Pattern.compile(Pattern.java:1028)
at 
kafka.tools.MirrorMaker$MirrorMakerNewConsumer.init(MirrorMaker.scala:521)
at kafka.tools.MirrorMaker$MirrorMakerThread.run(MirrorMaker.scala:389)
{code}
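The hang stems from `Pattern.compile` throwing on an invalid expression before the consumer thread can report the error. A fail-fast validation of the whitelist looks roughly like this — a minimal sketch, not MirrorMaker's actual code (the class and method names here are hypothetical):

```java
import java.util.regex.Pattern;
import java.util.regex.PatternSyntaxException;

public class WhitelistValidator {
    // Returns a compiled Pattern, or null if the expression is not a valid
    // Java regex, so the caller can exit with an error instead of hanging.
    public static Pattern tryCompile(String whitelist) {
        try {
            return Pattern.compile(whitelist);
        } catch (PatternSyntaxException e) {
            System.err.println("Invalid whitelist regex: " + e.getMessage());
            return null;
        }
    }

    public static void main(String[] args) {
        // "*" is a dangling quantifier and not a valid regex on its own;
        // a whitelist matching everything would be ".*" instead.
        System.out.println(tryCompile("topic.*") != null); // true
        System.out.println(tryCompile("*") != null);       // false
    }
}
```

Note that users coming from the old ZooKeeper-era wildcard syntax may expect `*` to work; validating up front turns that confusion into an immediate, readable error.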



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: MINOR: Fixes version lookup exception.

2016-01-22 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/748


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3137) Delete tombstones in log compacted topics may never get removed.

2016-01-22 Thread Jay Kreps (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15113222#comment-15113222
 ] 

Jay Kreps commented on KAFKA-3137:
--

We maintain the modification timestamp from the original file in the cleaned 
file (or at least we are attempting to, if you are seeing different behavior 
maybe there is a bug?). You can see this in LogSegment.lastModified_ and 
LogCleaner.cleanSegments.

> Delete tombstones in log compacted topics may never get removed.
> 
>
> Key: KAFKA-3137
> URL: https://issues.apache.org/jira/browse/KAFKA-3137
> Project: Kafka
>  Issue Type: Bug
>Reporter: James Cheng
>
> I spoke about this with [~junrao]. I haven't tried to reproduce this, but Jun 
> said that it looks like this is possible, so I'm filing it.
> Delete tombstones in log compacted topics are deleted after 
> delete.retention.ms (at the topic level) or log.cleaner.delete.retention.ms 
> (at the broker level).
> However, we don't have per-message timestamps (at least until KIP-32 is 
> implemented). So the timestamp of the log segment file is used as a proxy. 
> However, the modification time of the log segment changes whenever a 
> compaction run happens.
> It's possible then that if log compaction happens very frequently that 
> delete.retention.ms will never be reached. In that case, the delete 
> tombstones would stay around longer than the user expected. 
> I believe that means that log compaction would have to happen more frequently 
> than delete.retention.ms. The frequency of log compaction is some calculation 
> based on segment size, the criteria for segment roll (time or bytes), the 
> min.cleanable.dirty.ratio, as well as the amount of traffic coming into the 
> log compacted topic. So it's possible, but I'm not sure how likely.
> And I would imagine that this can't be fixed until KIP-32 is available.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
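The mechanism Jay describes — carrying the original segment's modification time over to the cleaned file — can be sketched with java.nio. This is a simplification under assumptions: Kafka's LogCleaner operates on segment objects, not bare paths, but the idea is the same.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.nio.file.attribute.FileTime;

public class MtimePreservingSwap {
    // Read the old segment's mtime, swap in the cleaned file, then stamp the
    // old mtime onto it, so time-based logic (e.g. delete.retention.ms) still
    // sees the original timestamp rather than the compaction time.
    public static void replaceKeepingMtime(Path original, Path cleaned) throws IOException {
        FileTime originalMtime = Files.getLastModifiedTime(original);
        Files.move(cleaned, original, StandardCopyOption.REPLACE_EXISTING);
        Files.setLastModifiedTime(original, originalMtime);
    }

    public static void main(String[] args) throws IOException {
        Path seg = Files.createTempFile("segment", ".log");
        Path cleaned = Files.createTempFile("segment", ".cleaned");
        FileTime before = Files.getLastModifiedTime(seg);
        replaceKeepingMtime(seg, cleaned);
        System.out.println(Files.getLastModifiedTime(seg).equals(before));
    }
}
```

If compaction preserves the timestamp this way, repeated cleaning runs do not reset the tombstone-retention clock, which is the scenario the ticket worried about.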


[jira] [Commented] (KAFKA-3138) 0.9.0 docs still say that log compaction doesn't work on compressed topics.

2016-01-22 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15113217#comment-15113217
 ] 

Ismael Juma commented on KAFKA-3138:


Go for it. :)

> 0.9.0 docs still say that log compaction doesn't work on compressed topics.
> ---
>
> Key: KAFKA-3138
> URL: https://issues.apache.org/jira/browse/KAFKA-3138
> Project: Kafka
>  Issue Type: Bug
>Reporter: James Cheng
>
> The 0.9.0 docs say "Log compaction is not yet compatible with compressed 
> topics.". But I believe that was fixed in 0.9.0.
> Is the fix to simply remove that line from the docs? It sounds newbie level. 
> If so, I would like to work on this JIRA.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk7 #979

2016-01-22 Thread Apache Jenkins Server
See 

Changes:

[cshapi] KAFKA-3136: Rename KafkaStreaming to KafkaStreams

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu3 (Ubuntu ubuntu legacy-ubuntu) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision 21c6cfe50dbe818a392c28f48ce8891f7f99aaf6 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 21c6cfe50dbe818a392c28f48ce8891f7f99aaf6
 > git rev-list 91ba074e4a9a012559ce8626c55db199c830cb0d # timeout=10
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/hudson4412190495988665216.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.4-rc-2/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:downloadWrapper

BUILD SUCCESSFUL

Total time: 28.502 secs
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/hudson1567081706275025072.sh
+ export GRADLE_OPTS=-Xmx1024m
+ GRADLE_OPTS=-Xmx1024m
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 clean jarAll testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.10/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:clean UP-TO-DATE
:clients:clean UP-TO-DATE
:connect:clean UP-TO-DATE
:core:clean UP-TO-DATE
:examples:clean UP-TO-DATE
:log4j-appender:clean UP-TO-DATE
:streams:clean UP-TO-DATE
:tools:clean UP-TO-DATE
:connect:api:clean UP-TO-DATE
:connect:file:clean UP-TO-DATE
:connect:json:clean UP-TO-DATE
:connect:runtime:clean UP-TO-DATE
:jar_core_2_10
Building project 'core' with Scala version 2.10.6
:kafka-trunk-jdk7:clients:compileJava
:jar_core_2_10 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Failed to capture snapshot of input files for task 'compileJava' during 
up-to-date check.  See stacktrace for details.
> Could not add entry 
> '/home/jenkins/.gradle/caches/modules-2/files-2.1/net.jpountz.lz4/lz4/1.3.0/c708bb2590c0652a642236ef45d9f99ff842a2ce/lz4-1.3.0.jar'
>  to cache fileHashes.bin 
> (/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk7/.gradle/2.10/taskArtifacts/fileHashes.bin).

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED

Total time: 25.693 secs
Build step 'Execute shell' marked build as failure
Recording test results
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
ERROR: Publisher 'Publish JUnit test result report' failed: No test report 
files were found. Configuration error?
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51


[jira] [Created] (KAFKA-3139) JMX metric ProducerRequestPurgatory doesn't exist, docs seem wrong.

2016-01-22 Thread James Cheng (JIRA)
James Cheng created KAFKA-3139:
--

 Summary: JMX metric ProducerRequestPurgatory doesn't exist, docs 
seem wrong.
 Key: KAFKA-3139
 URL: https://issues.apache.org/jira/browse/KAFKA-3139
 Project: Kafka
  Issue Type: Bug
Reporter: James Cheng


The docs say that there is a JMX metric 
{noformat}
kafka.server:type=ProducerRequestPurgatory,name=PurgatorySize
{noformat}

But that doesn't seem to work. Using jconsole to inspect our kafka broker, it 
seems like the right metric is
{noformat}
kafka.server:type=DelayedOperationPurgatory,name=PurgatorySize,delayedOperation=Produce
{noformat}

And there are also variants of the above for Fetch, Heartbeat, and Rebalance.

Is the fix to simply update the docs? From what I can see, the docs for this 
don't seem auto-generated from code. If it is a simple doc fix, I would like to 
take this JIRA.

Btw, what is NumDelayedOperations, and how is it different from PurgatorySize?
{noformat}
kafka.server:type=DelayedOperationPurgatory,name=NumDelayedOperations,delayedOperation=Produce
{noformat}

And should I also update the docs for that?
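The corrected MBean names can be sanity-checked with the JDK's javax.management API. The sketch below only builds and parses the ObjectName patterns observed via jconsole; it does not connect to a live broker, and the helper name is mine, not Kafka code:

```java
import javax.management.MalformedObjectNameException;
import javax.management.ObjectName;

public class PurgatoryMetricName {
    // Build the metric name actually observed via jconsole (not the documented one).
    static ObjectName purgatoryMetric(String delayedOperation) {
        try {
            return new ObjectName(
                "kafka.server:type=DelayedOperationPurgatory,name=PurgatorySize,delayedOperation="
                    + delayedOperation);
        } catch (MalformedObjectNameException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        // Variants exist for each purgatory type.
        for (String op : new String[] {"Produce", "Fetch", "Heartbeat", "Rebalance"}) {
            System.out.println(purgatoryMetric(op).getCanonicalName());
        }
        // ObjectName parses the key properties, so a typo in the pattern fails fast.
        if (!"Produce".equals(purgatoryMetric("Produce").getKeyProperty("delayedOperation")))
            throw new AssertionError("unexpected delayedOperation");
    }
}
```

The same pattern with `name=NumDelayedOperations` addresses the second metric mentioned above.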



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3138) 0.9.0 docs still say that log compaction doesn't work on compressed topics.

2016-01-22 Thread James Cheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Cheng updated KAFKA-3138:
---
Description: 
The 0.9.0 docs say "Log compaction is not yet compatible with compressed 
topics.". But I believe that was fixed in 0.9.0.

Is the fix to simply remove that line from the docs? It sounds newbie level. If 
so, I would like to work on this JIRA.

  was:
The 0.9.0 docs say "Log compaction is not yet compatible with compressed 
topics.". But I believe that was fixed in 0.9.0.

Is the fix to simply remove that line from the docs? Its sounds newbie level. 
If so, I would like to work on this JIRA.


> 0.9.0 docs still say that log compaction doesn't work on compressed topics.
> ---
>
> Key: KAFKA-3138
> URL: https://issues.apache.org/jira/browse/KAFKA-3138
> Project: Kafka
>  Issue Type: Bug
>Reporter: James Cheng
>
> The 0.9.0 docs say "Log compaction is not yet compatible with compressed 
> topics.". But I believe that was fixed in 0.9.0.
> Is the fix to simply remove that line from the docs? It sounds newbie level. 
> If so, I would like to work on this JIRA.





[jira] [Comment Edited] (KAFKA-3132) URI scheme in "listeners" property should not be case-sensitive

2016-01-22 Thread Vahid Hashemian (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15113143#comment-15113143
 ] 

Vahid Hashemian edited comment on KAFKA-3132 at 1/22/16 9:41 PM:
-

When I use the lowercase "plaintext" and start the broker I get this error 
after the parameter printout:

java.lang.IllegalArgumentException: Error creating broker listeners from 
'plaintext://:9092': No enum constant 
org.apache.kafka.common.protocol.SecurityProtocol.plaintext
at 
kafka.server.KafkaConfig.validateUniquePortAndProtocol(KafkaConfig.scala:894)
at kafka.server.KafkaConfig.getListeners(KafkaConfig.scala:913)
at kafka.server.KafkaConfig.<init>(KafkaConfig.scala:866)
at kafka.server.KafkaConfig$.fromProps(KafkaConfig.scala:698)
at kafka.server.KafkaConfig$.fromProps(KafkaConfig.scala:695)
at 
kafka.server.KafkaServerStartable$.fromProps(KafkaServerStartable.scala:28)
at kafka.Kafka$.main(Kafka.scala:58)
at kafka.Kafka.main(Kafka.scala)


was (Author: vahid):
When I use the lowercase "plaintext" and start the broker I get this error 
after the parameter printout:

{{java.lang.IllegalArgumentException: Error creating broker listeners from 
'plaintext://:9092': No enum constant 
org.apache.kafka.common.protocol.SecurityProtocol.plaintext
at 
kafka.server.KafkaConfig.validateUniquePortAndProtocol(KafkaConfig.scala:894)
at kafka.server.KafkaConfig.getListeners(KafkaConfig.scala:913)
at kafka.server.KafkaConfig.<init>(KafkaConfig.scala:866)
at kafka.server.KafkaConfig$.fromProps(KafkaConfig.scala:698)
at kafka.server.KafkaConfig$.fromProps(KafkaConfig.scala:695)
at 
kafka.server.KafkaServerStartable$.fromProps(KafkaServerStartable.scala:28)
at kafka.Kafka$.main(Kafka.scala:58)
at kafka.Kafka.main(Kafka.scala)}}

> URI scheme in "listeners" property should not be case-sensitive
> ---
>
> Key: KAFKA-3132
> URL: https://issues.apache.org/jira/browse/KAFKA-3132
> Project: Kafka
>  Issue Type: Bug
>  Components: config
>Affects Versions: 0.9.0.0
>Reporter: Jake Robb
>Priority: Minor
>  Labels: newbie
>
> I configured my Kafka brokers as follows:
> {{listeners=plaintext://kafka01:9092,ssl://kafka01:9093}}
> With this config, my Kafka brokers start, print out all of the config 
> properties, and exit quietly. No errors, nothing in the log. No indication of 
> a problem whatsoever, let alone the nature of said problem.
> Then, I changed my config as follows:
> {{listeners=PLAINTEXT://kafka01:9092,SSL://kafka01:9093}}
> Now they start and run just fine.
> Per [RFC-3986|https://tools.ietf.org/html/rfc3986#section-6.2.2.1]:
> {quote}
> When a URI uses components of the generic syntax, the component
> syntax equivalence rules always apply; namely, that the scheme and
> host are case-insensitive and therefore should be normalized to
> lowercase.  For example, the URI <HTTP://www.EXAMPLE.com/> is
> equivalent to <http://www.example.com/>.
> {quote}
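One fix direction consistent with the RFC is to normalize the scheme before the enum lookup. The sketch below uses a stand-in enum to mimic org.apache.kafka.common.protocol.SecurityProtocol; it is illustrative only, not the broker's actual code:

```java
import java.util.Locale;

public class ListenerSchemeNormalization {
    // Stand-in for org.apache.kafka.common.protocol.SecurityProtocol (illustration only).
    enum SecurityProtocol { PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL }

    // Normalize the scheme per RFC 3986 before the enum lookup, so
    // "plaintext://" is accepted as well as "PLAINTEXT://".
    static SecurityProtocol parseScheme(String scheme) {
        return SecurityProtocol.valueOf(scheme.toUpperCase(Locale.ROOT));
    }

    public static void main(String[] args) {
        System.out.println(parseScheme("plaintext")); // PLAINTEXT
        System.out.println(parseScheme("SSL"));       // SSL
        try {
            // The raw lookup reproduces the reported failure:
            // "No enum constant ...SecurityProtocol.plaintext"
            SecurityProtocol.valueOf("plaintext");
            throw new AssertionError("expected IllegalArgumentException");
        } catch (IllegalArgumentException expected) {
            System.out.println("raw lookup rejected lowercase scheme");
        }
    }
}
```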





[jira] [Created] (KAFKA-3138) 0.9.0 docs still say that log compaction doesn't work on compressed topics.

2016-01-22 Thread James Cheng (JIRA)
James Cheng created KAFKA-3138:
--

 Summary: 0.9.0 docs still say that log compaction doesn't work on 
compressed topics.
 Key: KAFKA-3138
 URL: https://issues.apache.org/jira/browse/KAFKA-3138
 Project: Kafka
  Issue Type: Bug
Reporter: James Cheng


The 0.9.0 docs say "Log compaction is not yet compatible with compressed 
topics.". But I believe that was fixed in 0.9.0.

Is the fix to simply remove that line from the docs? It sounds newbie level. 
If so, I would like to work on this JIRA.





[jira] [Commented] (KAFKA-3132) URI scheme in "listeners" property should not be case-sensitive

2016-01-22 Thread Vahid Hashemian (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15113143#comment-15113143
 ] 

Vahid Hashemian commented on KAFKA-3132:


When I use the lowercase "plaintext" and start the broker I get this error 
after the parameter printout:

{{java.lang.IllegalArgumentException: Error creating broker listeners from 
'plaintext://:9092': No enum constant 
org.apache.kafka.common.protocol.SecurityProtocol.plaintext
at 
kafka.server.KafkaConfig.validateUniquePortAndProtocol(KafkaConfig.scala:894)
at kafka.server.KafkaConfig.getListeners(KafkaConfig.scala:913)
at kafka.server.KafkaConfig.<init>(KafkaConfig.scala:866)
at kafka.server.KafkaConfig$.fromProps(KafkaConfig.scala:698)
at kafka.server.KafkaConfig$.fromProps(KafkaConfig.scala:695)
at 
kafka.server.KafkaServerStartable$.fromProps(KafkaServerStartable.scala:28)
at kafka.Kafka$.main(Kafka.scala:58)
at kafka.Kafka.main(Kafka.scala)}}

> URI scheme in "listeners" property should not be case-sensitive
> ---
>
> Key: KAFKA-3132
> URL: https://issues.apache.org/jira/browse/KAFKA-3132
> Project: Kafka
>  Issue Type: Bug
>  Components: config
>Affects Versions: 0.9.0.0
>Reporter: Jake Robb
>Priority: Minor
>  Labels: newbie
>
> I configured my Kafka brokers as follows:
> {{listeners=plaintext://kafka01:9092,ssl://kafka01:9093}}
> With this config, my Kafka brokers start, print out all of the config 
> properties, and exit quietly. No errors, nothing in the log. No indication of 
> a problem whatsoever, let alone the nature of said problem.
> Then, I changed my config as follows:
> {{listeners=PLAINTEXT://kafka01:9092,SSL://kafka01:9093}}
> Now they start and run just fine.
> Per [RFC-3986|https://tools.ietf.org/html/rfc3986#section-6.2.2.1]:
> {quote}
> When a URI uses components of the generic syntax, the component
> syntax equivalence rules always apply; namely, that the scheme and
> host are case-insensitive and therefore should be normalized to
> lowercase.  For example, the URI <HTTP://www.EXAMPLE.com/> is
> equivalent to <http://www.example.com/>.
> {quote}





[jira] [Created] (KAFKA-3137) Delete tombstones in log compacted topics may never get removed.

2016-01-22 Thread James Cheng (JIRA)
James Cheng created KAFKA-3137:
--

 Summary: Delete tombstones in log compacted topics may never get 
removed.
 Key: KAFKA-3137
 URL: https://issues.apache.org/jira/browse/KAFKA-3137
 Project: Kafka
  Issue Type: Bug
Reporter: James Cheng


I spoke about this with [~junrao]. I haven't tried to reproduce this, but Jun 
said that it looks like this is possible, so I'm filing it.

Delete tombstones in log compacted topics are deleted after delete.retention.ms 
(at the topic level) or log.cleaner.delete.retention.ms (at the broker level).

However, we don't have per-message timestamps (at least until KIP-32 is 
implemented). So the timestamp of the log segment file is used as a proxy. 
However, the modification time of the log segment changes whenever a compaction 
run happens.

It's possible then that if log compaction happens very frequently that 
delete.retention.ms will never be reached. In that case, the delete tombstones 
would stay around longer than the user expected. 

I believe that means that log compaction would have to happen more frequently 
than delete.retention.ms. The frequency of log compaction is some calculation 
based on segment size, the criteria for segment roll (time or bytes), the 
min.cleanable.dirty.ratio, as well as the amount of traffic coming into the log 
compacted topic. So it's possible, but I'm not sure how likely.

And I would imagine that this can't be fixed until KIP-32 is available.
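The timing argument can be made concrete with a little arithmetic, under the issue's assumption that compaction resets the segment's modification time on every pass (the interval values below are hypothetical):

```java
public class TombstoneRetentionSketch {
    // A tombstone is dropped only if, at some cleaning pass, the observed age of
    // its segment reaches delete.retention.ms. If compaction rewrote the segment
    // on the previous pass, the observed age is roughly the cleaning interval.
    static boolean tombstoneRemoved(long cleaningIntervalMs, long deleteRetentionMs) {
        return cleaningIntervalMs >= deleteRetentionMs;
    }

    public static void main(String[] args) {
        long deleteRetentionMs = 86_400_000L;  // delete.retention.ms default: 24h
        long frequentCleaningMs = 3_600_000L;  // hypothetical: a cleaner pass every hour

        // Frequent compaction keeps resetting the mtime, so the threshold is never hit.
        System.out.println(tombstoneRemoved(frequentCleaningMs, deleteRetentionMs));   // false

        // Only when compaction is rarer than the retention window can the tombstone go.
        System.out.println(tombstoneRemoved(2 * deleteRetentionMs, deleteRetentionMs)); // true
    }
}
```

If compaction instead preserves the prior segment's modification time, as the later comment on this issue suggests, the window above does not exist.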





[jira] [Resolved] (KAFKA-1796) Sanity check partition command line tools

2016-01-22 Thread Mayuresh Gharat (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1796?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mayuresh Gharat resolved KAFKA-1796.

Resolution: Not A Problem

> Sanity check partition command line tools
> -
>
> Key: KAFKA-1796
> URL: https://issues.apache.org/jira/browse/KAFKA-1796
> Project: Kafka
>  Issue Type: Bug
>Reporter: Guozhang Wang
>Assignee: Mayuresh Gharat
>  Labels: newbie
>
> We need to sanity check the input json has the valid values before triggering 
> the admin process. For example, we have seen a scenario where the json input 
> for partition reassignment tools have partition replica info as {broker-1, 
> broker-1, broker-2} and it is still accepted in ZK and eventually lead to 
> under replicated count, etc. This is partially because we use a Map rather 
> than a Set reading the json input for this case; but in general we need to 
> make sure the input parameters like Json  needs to be valid before writing it 
> to ZK.
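A minimal pre-flight check of the kind proposed could look like the sketch below; the method name and shape are hypothetical, not an existing AdminUtils API:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class ReassignmentSanityCheck {
    // Reject a replica list like [1, 1, 2]; reading the JSON into a Map/Set
    // silently collapses the duplicate, which is how the bad input slipped into ZK.
    static void validateReplicas(String topic, int partition, List<Integer> replicas) {
        Set<Integer> unique = new HashSet<>(replicas);
        if (unique.size() != replicas.size())
            throw new IllegalArgumentException(
                "Duplicate replicas " + replicas + " for " + topic + "-" + partition);
    }

    public static void main(String[] args) {
        validateReplicas("test", 0, Arrays.asList(1, 2, 3)); // fine
        try {
            validateReplicas("test", 0, Arrays.asList(1, 1, 2));
            throw new AssertionError("duplicate replicas should have been rejected");
        } catch (IllegalArgumentException expected) {
            System.out.println(expected.getMessage());
        }
    }
}
```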





[jira] [Commented] (KAFKA-1860) File system errors are not detected unless Kafka tries to write

2016-01-22 Thread Mayuresh Gharat (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15113128#comment-15113128
 ] 

Mayuresh Gharat commented on KAFKA-1860:


[~guozhang] can you take another look at the PR? I have addressed most of the 
comments on the PR. 

> File system errors are not detected unless Kafka tries to write
> ---
>
> Key: KAFKA-1860
> URL: https://issues.apache.org/jira/browse/KAFKA-1860
> Project: Kafka
>  Issue Type: Bug
>Reporter: Guozhang Wang
>Assignee: Mayuresh Gharat
> Fix For: 0.10.0.0
>
> Attachments: KAFKA-1860.patch
>
>
> When the disk (raid with caches dir) dies on a Kafka broker, typically the 
> filesystem gets mounted into read-only mode, and hence when Kafka tries to 
> read the disk, they'll get a FileNotFoundException with the read-only errno 
> set (EROFS).
> However, as long as there is no produce request received, hence no writes 
> attempted on the disks, Kafka will not exit on such FATAL error (when the 
> disk starts working again, Kafka might think some files are gone while they 
> will reappear later as raid comes back online). Instead it keeps spilling 
> exceptions like:
> {code}
> 2015/01/07 09:47:41.543 ERROR [KafkaScheduler] [kafka-scheduler-1] 
> [kafka-server] [] Uncaught exception in scheduled task 
> 'kafka-recovery-point-checkpoint'
> java.io.FileNotFoundException: 
> /export/content/kafka/i001_caches/recovery-point-offset-checkpoint.tmp 
> (Read-only file system)
>   at java.io.FileOutputStream.open(Native Method)
>   at java.io.FileOutputStream.<init>(FileOutputStream.java:206)
>   at java.io.FileOutputStream.<init>(FileOutputStream.java:156)
>   at kafka.server.OffsetCheckpoint.write(OffsetCheckpoint.scala:37)
> {code}
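One way to surface a read-only remount without waiting for produce traffic is a periodic write probe against each log directory. This is a hypothetical health check sketched for illustration, not something the broker currently does:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class LogDirWriteProbe {
    // Try a tiny write in the log directory so a read-only remount (EROFS) is
    // detected even when no produce request arrives.
    static boolean isWritable(Path logDir) {
        Path probe = logDir.resolve(".kafka-write-probe");
        try {
            Files.write(probe, new byte[0]);
            Files.deleteIfExists(probe);
            return true;
        } catch (IOException e) {
            // A read-only filesystem fails here the same way the checkpoint write does.
            return false;
        }
    }

    // Convenience wrapper used for a self-test against a fresh temp directory.
    static boolean probeTempDir() {
        try {
            return isWritable(Files.createTempDirectory("kafka-probe"));
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println("log dir writable: " + probeTempDir());
    }
}
```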





[jira] [Resolved] (KAFKA-3136) Rename KafkaStreaming to KafkaStreams

2016-01-22 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira resolved KAFKA-3136.
-
   Resolution: Fixed
Fix Version/s: 0.9.1.0

Issue resolved by pull request 800
[https://github.com/apache/kafka/pull/800]

> Rename KafkaStreaming to KafkaStreams
> -
>
> Key: KAFKA-3136
> URL: https://issues.apache.org/jira/browse/KAFKA-3136
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
> Fix For: 0.9.1.0
>
>
> To be aligned with the module name. Also change the config / metrics class 
> names accordingly. So that:
> 1. The entry process of Kafka Streams is KafkaStreams, and its corresponding 
> StreamConfig and StreamMetrics.
> 2. The high-level DSL is called KStream, and low-level programming API is 
> called Processor with ProcessorTopology.
> Also merge KeyValue and Entry into a top-level KeyValue class.





[jira] [Updated] (KAFKA-3136) Rename KafkaStreaming to KafkaStreams

2016-01-22 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-3136:
-
Description: 
To be aligned with the module name. Also change the config / metrics class 
names accordingly. So that:

1. The entry process of Kafka Streams is KafkaStreams, and its corresponding 
StreamConfig and StreamMetrics.
2. The high-level DSL is called KStream, and low-level programming API is 
called Processor with ProcessorTopology.

Also merge KeyValue and Entry into a top-level KeyValue class.

  was:To be aligned with the module name. Also change the config / metrics 
class names accordingly.


> Rename KafkaStreaming to KafkaStreams
> -
>
> Key: KAFKA-3136
> URL: https://issues.apache.org/jira/browse/KAFKA-3136
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
>
> To be aligned with the module name. Also change the config / metrics class 
> names accordingly. So that:
> 1. The entry process of Kafka Streams is KafkaStreams, and its corresponding 
> StreamConfig and StreamMetrics.
> 2. The high-level DSL is called KStream, and low-level programming API is 
> called Processor with ProcessorTopology.
> Also merge KeyValue and Entry into a top-level KeyValue class.





[jira] [Commented] (KAFKA-3136) Rename KafkaStreaming to KafkaStreams

2016-01-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15113056#comment-15113056
 ] 

ASF GitHub Bot commented on KAFKA-3136:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/800


> Rename KafkaStreaming to KafkaStreams
> -
>
> Key: KAFKA-3136
> URL: https://issues.apache.org/jira/browse/KAFKA-3136
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
>
> To be aligned with the module name. Also change the config / metrics class 
> names accordingly. So that:
> 1. The entry process of Kafka Streams is KafkaStreams, and its corresponding 
> StreamConfig and StreamMetrics.
> 2. The high-level DSL is called KStream, and low-level programming API is 
> called Processor with ProcessorTopology.
> Also merge KeyValue and Entry into a top-level KeyValue class.





[GitHub] kafka pull request: KAFKA-3136: Rename KafkaStreaming to KafkaStre...

2016-01-22 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/800


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (KAFKA-3136) Rename KafkaStreaming to KafkaStreams

2016-01-22 Thread Guozhang Wang (JIRA)
Guozhang Wang created KAFKA-3136:


 Summary: Rename KafkaStreaming to KafkaStreams
 Key: KAFKA-3136
 URL: https://issues.apache.org/jira/browse/KAFKA-3136
 Project: Kafka
  Issue Type: Sub-task
Reporter: Guozhang Wang
Assignee: Guozhang Wang


To be aligned with the module name. Also change the config / metrics class 
names accordingly.





[jira] [Commented] (KAFKA-3129) Potential Console Producer/Consumer Issue

2016-01-22 Thread Vahid Hashemian (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15113016#comment-15113016
 ] 

Vahid Hashemian commented on KAFKA-3129:


Yes, I'm creating the topic before trying the scenario.
I tried the consumer with {{--from-beginning}} and the consumption that had 
stopped at 9864 still did not go beyond that. This tells me there is an issue 
with the producer intermittently not sending all the messages for some reason.

> Potential Console Producer/Consumer Issue
> -
>
> Key: KAFKA-3129
> URL: https://issues.apache.org/jira/browse/KAFKA-3129
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer, producer 
>Affects Versions: 0.9.0.0
>Reporter: Vahid Hashemian
>Assignee: Neha Narkhede
>
> I have been running a simple test case in which I have a text file 
> {{messages.txt}} with 1,000,000 lines (lines contain numbers from 1 to 
> 1,000,000 in ascending order). I run the console consumer like this:
> {{$ bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test}}
> Topic {{test}} is on 1 partition with a replication factor of 1.
> Then I run the console producer like this:
> {{$ bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test < 
> messages.txt}}
> Then the console starts receiving the messages. And about half the times it 
> goes all the way to 1,000,000. But, in other cases, it stops short, usually 
> at 999,735.
> I tried running another console consumer on another machine and both 
> consumers behave the same way. I can't see anything related to this in the 
> logs.
> I also ran the same experiment with a similar file of 10,000 lines, and am 
> getting a similar behavior. When the consumer does not receive all the 10,000 
> messages it usually stops at 9,864.





[jira] [Commented] (KAFKA-3068) NetworkClient may connect to a different Kafka cluster than originally configured

2016-01-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15113009#comment-15113009
 ] 

ASF GitHub Bot commented on KAFKA-3068:
---

GitHub user enothereska opened a pull request:

https://github.com/apache/kafka/pull/804

KAFKA-3068: Keep track of bootstrap nodes instead of all nodes ever seen



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/enothereska/kafka kafka-3068

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/804.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #804


commit 32f3bffb2281a03fa6449627c144478a0ce666ad
Author: Eno Thereska 
Date:   2016-01-22T20:36:27Z

Keep track of bootstrap nodes instead of all nodes ever seen




> NetworkClient may connect to a different Kafka cluster than originally 
> configured
> -
>
> Key: KAFKA-3068
> URL: https://issues.apache.org/jira/browse/KAFKA-3068
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.9.0.0
>Reporter: Jun Rao
>Assignee: Eno Thereska
>
> In https://github.com/apache/kafka/pull/290, we added the logic to cache all 
> brokers (id and ip) that the client has ever seen. If we can't find an 
> available broker from the current Metadata, we will pick a broker that we 
> have ever seen (in NetworkClient.leastLoadedNode()).
> One potential problem this logic can introduce is the following. Suppose that 
> we have a broker with id 1 in a Kafka cluster. A producer client remembers 
> this broker in nodesEverSeen. At some point, we bring down this broker and 
> use the host in a different Kafka cluster. Then, the producer client uses 
> this broker from nodesEverSeen to refresh metadata. It will find the metadata 
> in a different Kafka cluster and start producing data there.
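The direction of the fix in the PR above (going by its title, "Keep track of bootstrap nodes instead of all nodes ever seen") can be sketched as follows; the method and types here are illustrative stand-ins, not the actual NetworkClient API:

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class BootstrapFallbackSketch {
    // When current metadata offers no reachable broker, fall back only to the
    // originally configured bootstrap nodes rather than to every node ever seen:
    // a node from nodesEverSeen may by now belong to a different cluster.
    static String pickNode(List<String> reachableFromMetadata, List<String> bootstrapNodes) {
        if (!reachableFromMetadata.isEmpty())
            return reachableFromMetadata.get(0);
        return bootstrapNodes.get(0);
    }

    public static void main(String[] args) {
        List<String> bootstrap = Arrays.asList("kafka01:9092");
        System.out.println(pickNode(Arrays.asList("kafka02:9092"), bootstrap)); // kafka02:9092
        System.out.println(pickNode(Collections.emptyList(), bootstrap));       // kafka01:9092
    }
}
```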





[GitHub] kafka pull request: KAFKA-3068: Keep track of bootstrap nodes inst...

2016-01-22 Thread enothereska
GitHub user enothereska opened a pull request:

https://github.com/apache/kafka/pull/804

KAFKA-3068: Keep track of bootstrap nodes instead of all nodes ever seen



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/enothereska/kafka kafka-3068

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/804.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #804


commit 32f3bffb2281a03fa6449627c144478a0ce666ad
Author: Eno Thereska 
Date:   2016-01-22T20:36:27Z

Keep track of bootstrap nodes instead of all nodes ever seen






Re: Pluggable Log Compaction Policy

2016-01-22 Thread Guozhang Wang
Bill,

Sounds good. If you want to drive pushing this feature, you can try to
first submit a KIP proposal:

https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Proposals

This admin command may have some correlations with KIP-4:

https://cwiki.apache.org/confluence/display/KAFKA/KIP-4+-+Command+line+and+centralized+administrative+operations

Guozhang



On Fri, Jan 22, 2016 at 10:58 AM, Bill Warshaw 
wrote:

> A function such as "deleteUpToOffset(TopicPartition tp, long
> minOffsetToRetain)" exposed through AdminUtils would be perfect.  I would
> agree that a one-time admin tool would be a good fit for our use case, as
> long as we can programmatically invoke it.  I realize that isn't completely
> trivial, since AdminUtils just updates Zookeeper metadata.
>
> On Thu, Jan 21, 2016 at 7:35 PM, Guozhang Wang  wrote:
>
> > Bill,
> >
> > For your case since once the log is cleaned up to the given offset
> > watermark (or threshold, whatever the name is), future cleaning with the
> > same watermark will effectively be a no-op, so I feel your scenario will
> be
> > better fit as a one-time admin tool to cleanup the logs rather than
> > customizing the periodic cleaning policy. Does this sound reasonable to
> > you?
> >
> >
> > Guozhang
> >
> >
> > On Wed, Jan 20, 2016 at 7:09 PM, Bill Warshaw 
> > wrote:
> >
> > > For our particular use case, we would need to.  This proposal is really
> > two
> > > separate pieces:  custom log compaction policy, and the ability to set
> > > arbitrary key-value pairs in a Topic configuration.
> > >
> > > I believe that Kafka's current behavior of throwing errors when it
> > > encounters configuration keys that aren't defined is meant to help
> users
> > > not misconfigure their configuration files.  If that is the sole
> > motivation
> > > for it, I would propose adding a property namespace, and allow users to
> > > configure arbitrary properties behind that particular namespace, while
> > > still enforcing strict parsing for all other properties.
> > >
> > > On Wed, Jan 20, 2016 at 9:23 PM, Guozhang Wang 
> > wrote:
> > >
> > > > So do you need to periodically update the key-value pairs to "advance
> > the
> > > > threshold for each topic"?
> > > >
> > > > Guozhang
> > > >
> > > > On Wed, Jan 20, 2016 at 5:51 PM, Bill Warshaw <
> bill.wars...@appian.com
> > >
> > > > wrote:
> > > >
> > > > > Compaction would be performed in the same manner as it is
> currently.
> > > > There
> > > > > is a predicate applied in the "shouldRetainMessage" function in
> > > > LogCleaner;
> > > > > ultimately we just want to be able to swap a custom implementation
> of
> > > > that
> > > > > particular method in.  Nothing else in the compaction codepath
> would
> > > need
> > > > > to change.
> > > > >
> > > > > For advancing the "threshold transaction_id", ideally we would be
> > able
> > > to
> > > > > set arbitrary key-value pairs on the topic configuration.  We have
> > > access
> > > > > to the topic configuration during log compaction, so a custom
> policy
> > > > class
> > > > > would also have access to that config, and could read anything we
> > > stored
> > > > in
> > > > > there.
> > > > >
> > > > > On Wed, Jan 20, 2016 at 8:14 PM, Guozhang Wang  >
> > > > wrote:
> > > > >
> > > > > > Hello Bill,
> > > > > >
> > > > > > Just to clarify your use case, is your "log compaction" executed
> > > > > manually,
> > > > > > or it is triggered periodically like the current log cleaning
> > by-key
> > > > > does?
> > > > > > If it is the latter case, how will you advance the "threshold
> > > > > > transaction_id" each time when it executes?
> > > > > >
> > > > > > Guozhang
> > > > > >
> > > > > >
> > > > > > On Wed, Jan 20, 2016 at 1:50 PM, Bill Warshaw <
> > > bill.wars...@appian.com
> > > > >
> > > > > > wrote:
> > > > > >
> > > > > > > Damian, I appreciate your quick response.
> > > > > > >
> > > > > > > Our transaction_id is incrementing for each transaction, so we
> > will
> > > > > only
> > > > > > > ever have one message in Kafka with a given transaction_id.  We
> > > > thought
> > > > > > > about using a rolling counter that is incremented on each
> > > checkpoint
> > > > as
> > > > > > the
> > > > > > > key, and manually triggering compaction after the checkpoint is
> > > > > complete,
> > > > > > > but our checkpoints are asynchronous.  This means that we would
> > > have
> > > > a
> > > > > > set
> > > > > > > of messages appended to the log after the checkpoint started,
> > with
> > > > > value
> > > > > > of
> > > > > > > the previous key + 1, that would also be compacted down to a
> > single
> > > > > > entry.
> > > > > > >
> > > > > > > Our particular custom policy would delete all messages whose
> key
> > > was
> > > > > less
> > > > > > > than a given transaction_id that we passed in.  I can imagine a
> > > wide
> > > > > > > variety of other custom policies that could be used for
> retention
> > > > based
> > > > > > on
> > > > > > > the key and value of the message.
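The custom predicate described above could be modeled like this; the shape is a hypothetical stand-in for LogCleaner's internal shouldRetainMessage check, not an existing plugin API:

```java
import java.util.function.BiPredicate;

public class OffsetThresholdRetentionPolicy {
    // Retain a message only if its key (a transaction id) is at or above the
    // threshold stored in topic config; everything below it is compacted away.
    static BiPredicate<Long, byte[]> retainAbove(long minTxnIdToRetain) {
        return (txnId, value) -> txnId >= minTxnIdToRetain;
    }

    public static void main(String[] args) {
        BiPredicate<Long, byte[]> shouldRetainMessage = retainAbove(100L);
        System.out.println(shouldRetainMessage.test(99L, new byte[0]));  // false
        System.out.println(shouldRetainMessage.test(100L, new byte[0])); // true
    }
}
```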

[jira] [Updated] (KAFKA-3135) Unexpected delay before fetch response transmission

2016-01-22 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson updated KAFKA-3135:
---
Affects Version/s: 0.9.0.0

> Unexpected delay before fetch response transmission
> ---
>
> Key: KAFKA-3135
> URL: https://issues.apache.org/jira/browse/KAFKA-3135
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>
> From the user list, Krzysztof Ciesielski reports the following:
> {quote}
> Scenario description:
> First, a producer writes 50 elements into a topic
> Then, a consumer starts to read, polling in a loop.
> When "max.partition.fetch.bytes" is set to a relatively small value, each
> "consumer.poll()" returns a batch of messages.
> If this value is left as default, the output tends to look like this:
> Poll returned 13793 elements
> Poll returned 13793 elements
> Poll returned 13793 elements
> Poll returned 13793 elements
> Poll returned 0 elements
> Poll returned 0 elements
> Poll returned 0 elements
> Poll returned 0 elements
> Poll returned 13793 elements
> Poll returned 13793 elements
> As we can see, there are weird "gaps" when poll returns 0 elements for some
> time. What is the reason for that? Maybe there are some good practices
> about setting "max.partition.fetch.bytes" which I don't follow?
> {quote}
> The gist to reproduce this problem is here: 
> https://gist.github.com/kciesielski/054bb4359a318aa17561.
> After some initial investigation, the delay appears to be in the server's 
> networking layer. Basically I see a delay of 5 seconds from the time that 
> Selector.send() is invoked in SocketServer.Processor with the fetch response 
> to the time that the send is completed. Using netstat in the middle of the 
> delay shows the following output:
> {code}
> tcp4   0  0  10.191.0.30.55455  10.191.0.30.9092   ESTABLISHED
> tcp4   0 102400  10.191.0.30.9092   10.191.0.30.55454  ESTABLISHED
> {code}
> From this, it looks like the data reaches the send buffer, but needs to be 
> flushed.





[jira] [Created] (KAFKA-3135) Unexpected delay before fetch response transmission

2016-01-22 Thread Jason Gustafson (JIRA)
Jason Gustafson created KAFKA-3135:
--

 Summary: Unexpected delay before fetch response transmission
 Key: KAFKA-3135
 URL: https://issues.apache.org/jira/browse/KAFKA-3135
 Project: Kafka
  Issue Type: Bug
Reporter: Jason Gustafson
Assignee: Jason Gustafson


From the user list, Krzysztof Ciesielski reports the following:

{quote}
Scenario description:
First, a producer writes 50 elements into a topic
Then, a consumer starts to read, polling in a loop.
When "max.partition.fetch.bytes" is set to a relatively small value, each
"consumer.poll()" returns a batch of messages.
If this value is left as default, the output tends to look like this:

Poll returned 13793 elements
Poll returned 13793 elements
Poll returned 13793 elements
Poll returned 13793 elements
Poll returned 0 elements
Poll returned 0 elements
Poll returned 0 elements
Poll returned 0 elements
Poll returned 13793 elements
Poll returned 13793 elements

As we can see, there are weird "gaps" when poll returns 0 elements for some
time. What is the reason for that? Maybe there are some good practices
about setting "max.partition.fetch.bytes" which I don't follow?
{quote}

The gist to reproduce this problem is here: 
https://gist.github.com/kciesielski/054bb4359a318aa17561.

After some initial investigation, the delay appears to be in the server's 
networking layer. Basically I see a delay of 5 seconds from the time that 
Selector.send() is invoked in SocketServer.Processor with the fetch response to 
the time that the send is completed. Using netstat in the middle of the delay 
shows the following output:

{code}
tcp4   0  0  10.191.0.30.55455  10.191.0.30.9092   ESTABLISHED
tcp4   0 102400  10.191.0.30.9092   10.191.0.30.55454  ESTABLISHED
{code}

From this, it looks like the data reaches the send buffer, but needs to be 
flushed.





[jira] [Commented] (KAFKA-3130) Consistent option names for command-line tools

2016-01-22 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15112928#comment-15112928
 ] 

Jiangjie Qin commented on KAFKA-3130:
-

[~ijuma] I don't think anyone's working on it. You can ping [~mwarhaftig] to 
confirm. Also, I think we can leave this ticket open because the KIP does not 
have an associated ticket yet.

> Consistent option names for command-line tools
> --
>
> Key: KAFKA-3130
> URL: https://issues.apache.org/jira/browse/KAFKA-3130
> Project: Kafka
>  Issue Type: Improvement
>  Components: tools
>Affects Versions: 0.9.0.0
>Reporter: Ismael Juma
>  Labels: newbie
>
> We should use consistent option names for all of our non-deprecated tools. A 
> couple of obvious issues come to mind:
> 1. Some tools use --broker-list while others use --bootstrap-servers (and 
> some don't support either and it needs to be passed via the properties 
> mechanism)
> 2. Some tools only allow the passing of properties inline in the command 
> invocation while others only allow a file to be used
> I am sure there are more. Part of this JIRA is to do that analysis. Fixing 
> the two issues above would already be a big improvement as they are a common 
> need.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3130) Consistent option names for command-line tools

2016-01-22 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15112918#comment-15112918
 ] 

Gwen Shapira commented on KAFKA-3130:
-

AFAIK, no one is working on it at the moment.

> Consistent option names for command-line tools
> --
>
> Key: KAFKA-3130
> URL: https://issues.apache.org/jira/browse/KAFKA-3130
> Project: Kafka
>  Issue Type: Improvement
>  Components: tools
>Affects Versions: 0.9.0.0
>Reporter: Ismael Juma
>  Labels: newbie
>
> We should use consistent option names for all of our non-deprecated tools. A 
> couple of obvious issues come to mind:
> 1. Some tools use --broker-list while others use --bootstrap-servers (and 
> some don't support either and it needs to be passed via the properties 
> mechanism)
> 2. Some tools only allow the passing of properties inline in the command 
> invocation while others only allow a file to be used
> I am sure there are more. Part of this JIRA is to do that analysis. Fixing 
> the two issues above would already be a big improvement as they are a common 
> need.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3134) Missing required configuration "value.deserializer" when initializing a KafkaConsumer with a valid "valueDeserializer"

2016-01-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15112919#comment-15112919
 ] 

ASF GitHub Bot commented on KAFKA-3134:
---

GitHub user happymap opened a pull request:

https://github.com/apache/kafka/pull/803

KAFKA-3134: Fix missing value.deserializer error during KafkaConsumer…

… initialization

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/happymap/kafka KAFKA-3134

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/803.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #803


commit 1afc67d4d944486dc8fb6361922a7e7a8b573ad5
Author: Yifan Ying 
Date:   2016-01-22T19:24:02Z

KAFKA-3134: Fix missing value.deserializer error during KafkaConsumer 
initialization




> Missing required configuration "value.deserializer" when initializing a 
> KafkaConsumer with a valid "valueDeserializer"
> --
>
> Key: KAFKA-3134
> URL: https://issues.apache.org/jira/browse/KAFKA-3134
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
>Reporter: Yifan Ying
>
> I tried to initialize a KafkaConsumer object with a null keyDeserializer 
> and a non-null valueDeserializer:
> {code}
> public KafkaConsumer(Properties properties, Deserializer<K> keyDeserializer,
>  Deserializer<V> valueDeserializer)
> {code}
> Then I got an exception as follows:
> {code}
> Caused by: org.apache.kafka.common.config.ConfigException: Missing required 
> configuration "value.deserializer" which has no default value.
>   at org.apache.kafka.common.config.ConfigDef.parse(ConfigDef.java:148)
>   at org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:49)
>   at org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:56)
>   at org.apache.kafka.clients.consumer.ConsumerConfig.<init>(ConsumerConfig.java:336)
>   at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:518)
>   .
> {code}
> Then I went to the ConsumerConfig.java file and found this block of code 
> causing the problem:
> {code}
> public static Map<String, Object> addDeserializerToConfig(Map<String, Object> configs,
>                                                           Deserializer<?> keyDeserializer,
>                                                           Deserializer<?> valueDeserializer) {
>     Map<String, Object> newConfigs = new HashMap<String, Object>();
>     newConfigs.putAll(configs);
>     if (keyDeserializer != null)
>         newConfigs.put(KEY_DESERIALIZER_CLASS_CONFIG, keyDeserializer.getClass());
>     if (keyDeserializer != null)
>         newConfigs.put(VALUE_DESERIALIZER_CLASS_CONFIG, valueDeserializer.getClass());
>     return newConfigs;
> }
> public static Properties addDeserializerToConfig(Properties properties,
>                                                  Deserializer<?> keyDeserializer,
>                                                  Deserializer<?> valueDeserializer) {
>     Properties newProperties = new Properties();
>     newProperties.putAll(properties);
>     if (keyDeserializer != null)
>         newProperties.put(KEY_DESERIALIZER_CLASS_CONFIG, keyDeserializer.getClass().getName());
>     if (keyDeserializer != null)
>         newProperties.put(VALUE_DESERIALIZER_CLASS_CONFIG, valueDeserializer.getClass().getName());
>     return newProperties;
> }
> {code}
> Instead of checking valueDeserializer, the code checks keyDeserializer every 
> time. So when keyDeserializer is null but valueDeserializer is not, the 
> valueDeserializer property will never get set. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-3134: Fix missing value.deserializer err...

2016-01-22 Thread happymap
GitHub user happymap opened a pull request:

https://github.com/apache/kafka/pull/803

KAFKA-3134: Fix missing value.deserializer error during KafkaConsumer…

… initialization

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/happymap/kafka KAFKA-3134

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/803.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #803


commit 1afc67d4d944486dc8fb6361922a7e7a8b573ad5
Author: Yifan Ying 
Date:   2016-01-22T19:24:02Z

KAFKA-3134: Fix missing value.deserializer error during KafkaConsumer 
initialization




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Resolved] (KAFKA-3130) Consistent option names for command-line tools

2016-01-22 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-3130.

Resolution: Duplicate

Thanks [~gwenshap], I missed that KIP. Do you know if anyone is still planning 
to work on it?

I will close this JIRA in the meantime.

> Consistent option names for command-line tools
> --
>
> Key: KAFKA-3130
> URL: https://issues.apache.org/jira/browse/KAFKA-3130
> Project: Kafka
>  Issue Type: Improvement
>  Components: tools
>Affects Versions: 0.9.0.0
>Reporter: Ismael Juma
>  Labels: newbie
>
> We should use consistent option names for all of our non-deprecated tools. A 
> couple of obvious issues come to mind:
> 1. Some tools use --broker-list while others use --bootstrap-servers (and 
> some don't support either and it needs to be passed via the properties 
> mechanism)
> 2. Some tools only allow the passing of properties inline in the command 
> invocation while others only allow a file to be used
> I am sure there are more. Part of this JIRA is to do that analysis. Fixing 
> the two issues above would already be a big improvement as they are a common 
> need.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2985) Consumer group stuck in rebalancing state

2016-01-22 Thread Jason Gustafson (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15112904#comment-15112904
 ] 

Jason Gustafson commented on KAFKA-2985:


[~fridrik] I ran your test above a couple dozen times against the 0.9.0 branch 
and cannot reproduce the problem. Can you confirm that you have tried against 
that branch?

> Consumer group stuck in rebalancing state
> -
>
> Key: KAFKA-2985
> URL: https://issues.apache.org/jira/browse/KAFKA-2985
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.9.0.0
> Environment: Kafka 0.9.0.0.
> Kafka Java consumer 0.9.0.0
> 2 Java producers.
> 3 Java consumers using the new consumer API.
> 2 kafka brokers.
>Reporter: Jens Rantil
>Assignee: Jason Gustafson
>
> We've been doing some load testing on Kafka. _After_ the load test, when our 
> consumers are done, we have now twice seen Kafka become stuck in consumer 
> group rebalancing. This is after all our consumers are done consuming and 
> essentially polling periodically without getting any records.
> The brokers list the consumer group (named "default"), but I can't query the 
> offsets:
> {noformat}
> jrantil@queue-0:/srv/kafka/kafka$ ./bin/kafka-consumer-groups.sh 
> --new-consumer --bootstrap-server localhost:9092 --list
> default
> jrantil@queue-0:/srv/kafka/kafka$ ./bin/kafka-consumer-groups.sh 
> --new-consumer --bootstrap-server localhost:9092 --describe --group 
> default|sort
> Consumer group `default` does not exist or is rebalancing.
> {noformat}
> Retrying to query the offsets for 15 minutes or so still said it was 
> rebalancing. After restarting our first broker, the group immediately started 
> rebalancing. That broker was logging this before restart:
> {noformat}
> [2015-12-12 13:09:48,517] INFO [Group Metadata Manager on Broker 0]: Removed 
> 0 expired offsets in 0 milliseconds. (kafka.coordinator.GroupMetadataManager)
> [2015-12-12 13:10:16,139] INFO [GroupCoordinator 0]: Stabilized group default 
> generation 16 (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:10:16,141] INFO [GroupCoordinator 0]: Assignment received from 
> leader for group default for generation 16 
> (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:10:16,575] INFO [GroupCoordinator 0]: Preparing to restabilize 
> group default with old generation 16 (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:11:15,141] INFO [GroupCoordinator 0]: Stabilized group default 
> generation 17 (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:11:15,143] INFO [GroupCoordinator 0]: Assignment received from 
> leader for group default for generation 17 
> (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:11:15,314] INFO [GroupCoordinator 0]: Preparing to restabilize 
> group default with old generation 17 (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:12:14,144] INFO [GroupCoordinator 0]: Stabilized group default 
> generation 18 (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:12:14,145] INFO [GroupCoordinator 0]: Assignment received from 
> leader for group default for generation 18 
> (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:12:14,340] INFO [GroupCoordinator 0]: Preparing to restabilize 
> group default with old generation 18 (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:13:13,146] INFO [GroupCoordinator 0]: Stabilized group default 
> generation 19 (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:13:13,148] INFO [GroupCoordinator 0]: Assignment received from 
> leader for group default for generation 19 
> (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:13:13,238] INFO [GroupCoordinator 0]: Preparing to restabilize 
> group default with old generation 19 (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:14:12,148] INFO [GroupCoordinator 0]: Stabilized group default 
> generation 20 (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:14:12,149] INFO [GroupCoordinator 0]: Assignment received from 
> leader for group default for generation 20 
> (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:14:12,360] INFO [GroupCoordinator 0]: Preparing to restabilize 
> group default with old generation 20 (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:15:11,150] INFO [GroupCoordinator 0]: Stabilized group default 
> generation 21 (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:15:11,152] INFO [GroupCoordinator 0]: Assignment received from 
> leader for group default for generation 21 
> (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:15:11,217] INFO [GroupCoordinator 0]: Preparing to restabilize 
> group default with old generation 21 (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:16:10,152] INFO [GroupCoordinator 0]: Stabilized group default 
> generation 22 (kafka.coordinator.GroupCoordinator)
> [2

[jira] [Commented] (KAFKA-3130) Consistent option names for command-line tools

2016-01-22 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15112901#comment-15112901
 ] 

Gwen Shapira commented on KAFKA-3130:
-

See KIP-14

> Consistent option names for command-line tools
> --
>
> Key: KAFKA-3130
> URL: https://issues.apache.org/jira/browse/KAFKA-3130
> Project: Kafka
>  Issue Type: Improvement
>  Components: tools
>Affects Versions: 0.9.0.0
>Reporter: Ismael Juma
>  Labels: newbie
>
> We should use consistent option names for all of our non-deprecated tools. A 
> couple of obvious issues come to mind:
> 1. Some tools use --broker-list while others use --bootstrap-servers (and 
> some don't support either and it needs to be passed via the properties 
> mechanism)
> 2. Some tools only allow the passing of properties inline in the command 
> invocation while others only allow a file to be used
> I am sure there are more. Part of this JIRA is to do that analysis. Fixing 
> the two issues above would already be a big improvement as they are a common 
> need.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3134) Missing required configuration "value.deserializer" when initializing a KafkaConsumer with a valid "valueDeserializer"

2016-01-22 Thread Yifan Ying (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yifan Ying updated KAFKA-3134:
--
Description: 
I tried to initialize a KafkaConsumer object with a null keyDeserializer 
and a non-null valueDeserializer:
{code}
public KafkaConsumer(Properties properties, Deserializer<K> keyDeserializer,
 Deserializer<V> valueDeserializer)
{code}

Then I got an exception as follows:
{code}
Caused by: org.apache.kafka.common.config.ConfigException: Missing required 
configuration "value.deserializer" which has no default value.
at org.apache.kafka.common.config.ConfigDef.parse(ConfigDef.java:148)
at org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:49)
at org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:56)
at org.apache.kafka.clients.consumer.ConsumerConfig.<init>(ConsumerConfig.java:336)
at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:518)
.
{code}

Then I went to the ConsumerConfig.java file and found this block of code causing 
the problem:
{code}
public static Map<String, Object> addDeserializerToConfig(Map<String, Object> configs,
                                                          Deserializer<?> keyDeserializer,
                                                          Deserializer<?> valueDeserializer) {
    Map<String, Object> newConfigs = new HashMap<String, Object>();
    newConfigs.putAll(configs);
    if (keyDeserializer != null)
        newConfigs.put(KEY_DESERIALIZER_CLASS_CONFIG, keyDeserializer.getClass());
    if (keyDeserializer != null)
        newConfigs.put(VALUE_DESERIALIZER_CLASS_CONFIG, valueDeserializer.getClass());
    return newConfigs;
}

public static Properties addDeserializerToConfig(Properties properties,
                                                 Deserializer<?> keyDeserializer,
                                                 Deserializer<?> valueDeserializer) {
    Properties newProperties = new Properties();
    newProperties.putAll(properties);
    if (keyDeserializer != null)
        newProperties.put(KEY_DESERIALIZER_CLASS_CONFIG, keyDeserializer.getClass().getName());
    if (keyDeserializer != null)
        newProperties.put(VALUE_DESERIALIZER_CLASS_CONFIG, valueDeserializer.getClass().getName());
    return newProperties;
}
{code}

Instead of checking valueDeserializer, the code checks keyDeserializer every 
time. So when keyDeserializer is null but valueDeserializer is not, the 
valueDeserializer property will never get set. 


  was:
I tried to initialize a KafkaConsumer object using with a null keyDeserializer 
and a non-null valueDeserializer:
{code}
public KafkaConsumer(Properties properties, Deserializer<K> keyDeserializer,
 Deserializer<V> valueDeserializer)
{code}

Then I got an exception as follows:
{code}
Caused by: org.apache.kafka.common.config.ConfigException: Missing required 
configuration "value.deserializer" which has no default value.
at org.apache.kafka.common.config.ConfigDef.parse(ConfigDef.java:148)
at org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:49)
at org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:56)
at org.apache.kafka.clients.consumer.ConsumerConfig.<init>(ConsumerConfig.java:336)
at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:518)
.
{code}

Then I went ConsumerConfig.java file and found this block of code causing the 
problem:
{code}
public static Map<String, Object> addDeserializerToConfig(Map<String, Object> configs,
                                                          Deserializer<?> keyDeserializer,
                                                          Deserializer<?> valueDeserializer) {
    Map<String, Object> newConfigs = new HashMap<String, Object>();
    newConfigs.putAll(configs);
    if (keyDeserializer != null)
        newConfigs.put(KEY_DESERIALIZER_CLASS_CONFIG, keyDeserializer.getClass());
    if (keyDeserializer != null)
        newConfigs.put(VALUE_DESERIALIZER_CLASS_CONFIG, valueDeserializer.getClass());
    return newConfigs;
}

public static Properties addDeserializerToConfig(Properties properties,
                                                 Deserializer<?> keyDeserializer,
                                                 Deserializer<?> valueDeserializer) {
    Properties newProperties = new Properties();
    newProperties.putAll(properties);
    if (keyDeserializer != null)
        newProperties.put(KEY_DESERIALIZER_CLASS_CONFIG, keyDeserializer.getClass().getName());
    if (keyDeserializer != null)
        newProperties.put(VALUE_DESERIALIZER_CLASS_CONFIG, valueDeserializer.getClass().getName());
    return newProperties;
}
{code}

Instead of checking valueDeserializer, the code checks keyDeserializer every 
time. So when keyDeserializer is null but valueDeserializer is not, the 
valueDeserializer property will never get set.

[jira] [Updated] (KAFKA-3134) Missing required configuration "value.deserializer" when initializing a KafkaConsumer with a valid "valueDeserializer"

2016-01-22 Thread Yifan Ying (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yifan Ying updated KAFKA-3134:
--
Description: 
I tried to initialize a KafkaConsumer object with a null keyDeserializer 
and a non-null valueDeserializer:
{code}
public KafkaConsumer(Properties properties, Deserializer<K> keyDeserializer,
 Deserializer<V> valueDeserializer)
{code}

Then I got an exception as follows:
{code}
Caused by: org.apache.kafka.common.config.ConfigException: Missing required 
configuration "value.deserializer" which has no default value.
at org.apache.kafka.common.config.ConfigDef.parse(ConfigDef.java:148)
at org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:49)
at org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:56)
at org.apache.kafka.clients.consumer.ConsumerConfig.<init>(ConsumerConfig.java:336)
at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:518)
.
{code}

Then I went to the ConsumerConfig.java file and found this block of code causing 
the problem:
{code}
public static Map<String, Object> addDeserializerToConfig(Map<String, Object> configs,
                                                          Deserializer<?> keyDeserializer,
                                                          Deserializer<?> valueDeserializer) {
    Map<String, Object> newConfigs = new HashMap<String, Object>();
    newConfigs.putAll(configs);
    if (keyDeserializer != null)
        newConfigs.put(KEY_DESERIALIZER_CLASS_CONFIG, keyDeserializer.getClass());
    if (keyDeserializer != null)
        newConfigs.put(VALUE_DESERIALIZER_CLASS_CONFIG, valueDeserializer.getClass());
    return newConfigs;
}

public static Properties addDeserializerToConfig(Properties properties,
                                                 Deserializer<?> keyDeserializer,
                                                 Deserializer<?> valueDeserializer) {
    Properties newProperties = new Properties();
    newProperties.putAll(properties);
    if (keyDeserializer != null)
        newProperties.put(KEY_DESERIALIZER_CLASS_CONFIG, keyDeserializer.getClass().getName());
    if (keyDeserializer != null)
        newProperties.put(VALUE_DESERIALIZER_CLASS_CONFIG, valueDeserializer.getClass().getName());
    return newProperties;
}
{code}

Instead of checking valueDeserializer, the code checks keyDeserializer every 
time. So when keyDeserializer is null but valueDeserializer is not, the 
valueDeserializer property will never get set. 


  was:
I tried to initialize a KafkaConsumer object using 

{code}
public KafkaConsumer(Properties properties, Deserializer<K> keyDeserializer,
 Deserializer<V> valueDeserializer)
{code}

Then I got an exception as follows:
{code}
Caused by: org.apache.kafka.common.config.ConfigException: Missing required 
configuration "value.deserializer" which has no default value.
at org.apache.kafka.common.config.ConfigDef.parse(ConfigDef.java:148)
at org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:49)
at org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:56)
at org.apache.kafka.clients.consumer.ConsumerConfig.<init>(ConsumerConfig.java:336)
at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:518)
.
{code}

Then I went ConsumerConfig.java file and found this block of code causing the 
problem:
{code}
public static Map<String, Object> addDeserializerToConfig(Map<String, Object> configs,
                                                          Deserializer<?> keyDeserializer,
                                                          Deserializer<?> valueDeserializer) {
    Map<String, Object> newConfigs = new HashMap<String, Object>();
    newConfigs.putAll(configs);
    if (keyDeserializer != null)
        newConfigs.put(KEY_DESERIALIZER_CLASS_CONFIG, keyDeserializer.getClass());
    if (keyDeserializer != null)
        newConfigs.put(VALUE_DESERIALIZER_CLASS_CONFIG, valueDeserializer.getClass());
    return newConfigs;
}

public static Properties addDeserializerToConfig(Properties properties,
                                                 Deserializer<?> keyDeserializer,
                                                 Deserializer<?> valueDeserializer) {
    Properties newProperties = new Properties();
    newProperties.putAll(properties);
    if (keyDeserializer != null)
        newProperties.put(KEY_DESERIALIZER_CLASS_CONFIG, keyDeserializer.getClass().getName());
    if (keyDeserializer != null)
        newProperties.put(VALUE_DESERIALIZER_CLASS_CONFIG, valueDeserializer.getClass().getName());
    return newProperties;
}
{code}

Instead of checking valueDeserializer, the code checks keyDeserializer every 
time. So when keyDeserializer is null but valueDeserializer is not, the 
valueDeserializer property will never get set.

[jira] [Created] (KAFKA-3134) Missing required configuration "value.deserializer" when initializing a KafkaConsumer with a valid "valueDeserializer"

2016-01-22 Thread Yifan Ying (JIRA)
Yifan Ying created KAFKA-3134:
-

 Summary: Missing required configuration "value.deserializer" when 
initializing a KafkaConsumer with a valid "valueDeserializer"
 Key: KAFKA-3134
 URL: https://issues.apache.org/jira/browse/KAFKA-3134
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.9.0.0
Reporter: Yifan Ying


I tried to initialize a KafkaConsumer object using 

{code}
public KafkaConsumer(Properties properties, Deserializer<K> keyDeserializer,
 Deserializer<V> valueDeserializer)
{code}

Then I got an exception as follows:
{code}
Caused by: org.apache.kafka.common.config.ConfigException: Missing required 
configuration "value.deserializer" which has no default value.
at org.apache.kafka.common.config.ConfigDef.parse(ConfigDef.java:148)
at org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:49)
at org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:56)
at org.apache.kafka.clients.consumer.ConsumerConfig.<init>(ConsumerConfig.java:336)
at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:518)
.
{code}

Then I went to the ConsumerConfig.java file and found this block of code causing 
the problem:
{code}
public static Map<String, Object> addDeserializerToConfig(Map<String, Object> configs,
                                                          Deserializer<?> keyDeserializer,
                                                          Deserializer<?> valueDeserializer) {
    Map<String, Object> newConfigs = new HashMap<String, Object>();
    newConfigs.putAll(configs);
    if (keyDeserializer != null)
        newConfigs.put(KEY_DESERIALIZER_CLASS_CONFIG, keyDeserializer.getClass());
    if (keyDeserializer != null)
        newConfigs.put(VALUE_DESERIALIZER_CLASS_CONFIG, valueDeserializer.getClass());
    return newConfigs;
}

public static Properties addDeserializerToConfig(Properties properties,
                                                 Deserializer<?> keyDeserializer,
                                                 Deserializer<?> valueDeserializer) {
    Properties newProperties = new Properties();
    newProperties.putAll(properties);
    if (keyDeserializer != null)
        newProperties.put(KEY_DESERIALIZER_CLASS_CONFIG, keyDeserializer.getClass().getName());
    if (keyDeserializer != null)
        newProperties.put(VALUE_DESERIALIZER_CLASS_CONFIG, valueDeserializer.getClass().getName());
    return newProperties;
}
{code}

Instead of checking valueDeserializer, the code checks keyDeserializer every 
time. So when keyDeserializer is null but valueDeserializer is not, the 
valueDeserializer property will never get set. 
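For illustration only, the fix is to make the second null-check test valueDeserializer. The standalone sketch below demonstrates the corrected logic; the config constants are redefined locally and the Deserializer parameters are stubbed as plain Objects, so this is not the actual Kafka source:

```java
import java.util.HashMap;
import java.util.Map;

public class DeserializerConfigFix {
    static final String KEY_DESERIALIZER_CLASS_CONFIG = "key.deserializer";
    static final String VALUE_DESERIALIZER_CLASS_CONFIG = "value.deserializer";

    // Corrected version: the second branch tests valueDeserializer,
    // not keyDeserializer (the bug described above).
    static Map<String, Object> addDeserializerToConfig(Map<String, Object> configs,
                                                       Object keyDeserializer,
                                                       Object valueDeserializer) {
        Map<String, Object> newConfigs = new HashMap<>(configs);
        if (keyDeserializer != null)
            newConfigs.put(KEY_DESERIALIZER_CLASS_CONFIG, keyDeserializer.getClass());
        if (valueDeserializer != null)  // was erroneously: keyDeserializer != null
            newConfigs.put(VALUE_DESERIALIZER_CLASS_CONFIG, valueDeserializer.getClass());
        return newConfigs;
    }

    public static void main(String[] args) {
        // Null key deserializer, non-null value deserializer:
        // value.deserializer must still end up in the config.
        Map<String, Object> out = addDeserializerToConfig(new HashMap<>(), null, "value-stub");
        if (!out.containsKey(VALUE_DESERIALIZER_CLASS_CONFIG))
            throw new AssertionError("value.deserializer not set");
        if (out.containsKey(KEY_DESERIALIZER_CLASS_CONFIG))
            throw new AssertionError("key.deserializer unexpectedly set");
        System.out.println("ok");
    }
}
```

With the corrected check, a consumer constructed with only a valueDeserializer no longer trips the "Missing required configuration" error.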




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3126) Weird behavior in kafkaController on Controlled shutdowns. The leaderAndIsr in zookeeper is not updated during controlled shutdown.

2016-01-22 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15112883#comment-15112883
 ] 

Jiangjie Qin commented on KAFKA-3126:
-

That is exactly what I said. You see a difference between (5) and (6) because 
the controller sees broker A still in the ISR for one of the partitions, but 
not in the ISR of the other.

> Weird behavior in kafkaController on Controlled shutdowns. The leaderAndIsr 
> in zookeeper is not updated during controlled shutdown.
> ---
>
> Key: KAFKA-3126
> URL: https://issues.apache.org/jira/browse/KAFKA-3126
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Reporter: Mayuresh Gharat
>Assignee: Mayuresh Gharat
>
> Consider Broker B is controller, broker A is undergoing shutdown. 
> 2016/01/14 19:49:22.884 [KafkaController] [Controller B]: Shutting down 
> broker A
> 2016/01/14 19:49:22.918 [ReplicaStateMachine] [Replica state machine on 
> controller B]: Invoking state change to OfflineReplica for replicas 
> [Topic=testTopic1,Partition=1,Replica=A] ---> (1)
> 2016/01/14 19:49:22.930 [KafkaController] [Controller B]: New leader and ISR 
> for partition [testTopic1,1] is {"leader":D,"leader_epoch":1,"isr":[D]} 
> --> (2)
> 2016/01/14 19:49:23.028 [ReplicaStateMachine] [Replica state machine on 
> controller B]: Invoking state change to OfflineReplica for replicas 
> [Topic=testTopic2,Partition=1,Replica=A] ---> (3)
> 2016/01/14 19:49:23.032 [KafkaController] [Controller B]: New leader and ISR 
> for partition [testTopic2,1] is {"leader":C,"leader_epoch":10,"isr":[C]} 
> -> (4)
> 2016/01/14 19:49:23.996 [KafkaController] [Controller B]: Broker failure 
> callback for A
> 2016/01/14 19:49:23.997 [PartitionStateMachine] [Partition state machine on 
> Controller B]: Invoking state change to OfflinePartition for partitions 
> 2016/01/14 19:49:23.998 [ReplicaStateMachine] [Replica state machine on 
> controller B]: Invoking state change to OfflineReplica for replicas 
> [Topic=testTopic2,Partition=0,Replica=A],
> [Topic=__consumer_offsets,Partition=5,Replica=A],
> [Topic=testTopic1,Partition=2,Replica=A],
> [Topic=__consumer_offsets,Partition=96,Replica=A],
> [Topic=testTopic2,Partition=1,Replica=A],
> [Topic=__consumer_offsets,Partition=36,Replica=A],
> [Topic=testTopic1,Partition=4,Replica=A],
> [Topic=__consumer_offsets,Partition=85,Replica=A],
> [Topic=testTopic1,Partition=6,Replica=A],
> [Topic=testTopic1,Partition=1,Replica=A]
> 2016/01/14 19:49:24.029 [KafkaController] [Controller B]: New leader and ISR 
> for partition [testTopic2,1] is {"leader":C,"leader_epoch":11,"isr":[C]} 
> --> (5)
> 2016/01/14 19:49:24.212 [KafkaController] [Controller B]: Cannot remove 
> replica A from ISR of partition [testTopic1,1] since it is not in the ISR. 
> Leader = D ; ISR = List(D) --> (6)
> If after (1) and (2) the controller gets rid of replica A from the ISR in 
> zookeeper for [testTopic1-1] as displayed in (6), why doesn't it do the same 
> for [testTopic2-1] as per (5)?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3068) NetworkClient may connect to a different Kafka cluster than originally configured

2016-01-22 Thread Ewen Cheslack-Postava (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15112863#comment-15112863
 ] 

Ewen Cheslack-Postava commented on KAFKA-3068:
--

[~enothereska] [~ijuma] Agreed, likely a KIP and separate JIRA. I was raising 
it here to get some feedback about whether it makes sense and see if anybody 
had any ideas for less intrusive solutions (i.e. do we really need that level 
of pluggability or could we get away with something less?). I agree that a 
larger change like this shouldn't block fixing this JIRA.

> NetworkClient may connect to a different Kafka cluster than originally 
> configured
> -
>
> Key: KAFKA-3068
> URL: https://issues.apache.org/jira/browse/KAFKA-3068
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.9.0.0
>Reporter: Jun Rao
>Assignee: Eno Thereska
>
> In https://github.com/apache/kafka/pull/290, we added the logic to cache all 
> brokers (id and ip) that the client has ever seen. If we can't find an 
> available broker from the current Metadata, we will pick a broker that we 
> have ever seen (in NetworkClient.leastLoadedNode()).
> One potential problem this logic can introduce is the following. Suppose that 
> we have a broker with id 1 in a Kafka cluster. A producer client remembers 
> this broker in nodesEverSeen. At some point, we bring down this broker and 
> use the host in a different Kafka cluster. Then, the producer client uses 
> this broker from nodesEverSeen to refresh metadata. It will find the metadata 
> in a different Kafka cluster and start producing data there.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Pluggable Log Compaction Policy

2016-01-22 Thread Bill Warshaw
A function such as "deleteUpToOffset(TopicPartition tp, long
minOffsetToRetain)" exposed through AdminUtils would be perfect.  I would
agree that a one-time admin tool would be a good fit for our use case, as
long as we can programmatically invoke it.  I realize that isn't completely
trivial, since AdminUtils just updates Zookeeper metadata.
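As a rough sketch of the pluggable retention predicate being discussed here (every name below, including the "min.retain.offset" topic-config key and the shouldRetainMessage signature, is hypothetical and not an actual Kafka API):

```java
import java.util.Map;

public class OffsetWatermarkPolicy {
    // Hypothetical stand-in for a pluggable LogCleaner shouldRetainMessage hook:
    // retain every record whose offset is at or above a per-topic watermark.
    static boolean shouldRetainMessage(long recordOffset, Map<String, String> topicConfig) {
        // "min.retain.offset" is an invented topic-config key for this sketch.
        String watermark = topicConfig.getOrDefault("min.retain.offset", "0");
        return recordOffset >= Long.parseLong(watermark);
    }

    public static void main(String[] args) {
        Map<String, String> cfg = Map.of("min.retain.offset", "100");
        if (shouldRetainMessage(99, cfg))
            throw new AssertionError("offset 99 should be eligible for deletion");
        if (!shouldRetainMessage(100, cfg))
            throw new AssertionError("offset 100 should be retained");
        System.out.println("ok");
    }
}
```

Advancing the watermark would then amount to updating the topic-level key-value pair, which is why arbitrary per-topic properties come up as the second half of the proposal.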

On Thu, Jan 21, 2016 at 7:35 PM, Guozhang Wang  wrote:

> Bill,
>
> For your case since once the log is cleaned up to the given offset
> watermark (or threshold, whatever the name is), future cleaning with the
> same watermark will effectively be a no-op, so I feel your scenario will be
> better fit as a one-time admin tool to cleanup the logs rather than
> customizing the periodic cleaning policy. Does this sound reasonable to
> you?
>
>
> Guozhang
>
>
> On Wed, Jan 20, 2016 at 7:09 PM, Bill Warshaw 
> wrote:
>
> > For our particular use case, we would need to.  This proposal is really
> two
> > separate pieces:  custom log compaction policy, and the ability to set
> > arbitrary key-value pairs in a Topic configuration.
> >
> > I believe that Kafka's current behavior of throwing errors when it
> > encounters configuration keys that aren't defined is meant to help users
> > not misconfigure their configuration files.  If that is the sole
> motivation
> > for it, I would propose adding a property namespace, and allow users to
> > configure arbitrary properties behind that particular namespace, while
> > still enforcing strict parsing for all other properties.
> >
> > On Wed, Jan 20, 2016 at 9:23 PM, Guozhang Wang 
> wrote:
> >
> > > So do you need to periodically update the key-value pairs to "advance
> the
> > > threshold for each topic"?
> > >
> > > Guozhang
> > >
> > > On Wed, Jan 20, 2016 at 5:51 PM, Bill Warshaw  >
> > > wrote:
> > >
> > > > Compaction would be performed in the same manner as it is currently.
> > > There
> > > > is a predicate applied in the "shouldRetainMessage" function in
> > > LogCleaner;
> > > > ultimately we just want to be able to swap a custom implementation of
> > > that
> > > > particular method in.  Nothing else in the compaction codepath would
> > need
> > > > to change.
> > > >
> > > > For advancing the "threshold transaction_id", ideally we would be
> able
> > to
> > > > set arbitrary key-value pairs on the topic configuration.  We have
> > access
> > > > to the topic configuration during log compaction, so a custom policy
> > > class
> > > > would also have access to that config, and could read anything we
> > stored
> > > in
> > > > there.
> > > >
> > > > On Wed, Jan 20, 2016 at 8:14 PM, Guozhang Wang 
> > > wrote:
> > > >
> > > > > Hello Bill,
> > > > >
> > > > > Just to clarify your use case, is your "log compaction" executed
> > > > manually,
> > > > > or it is triggered periodically like the current log cleaning
> by-key
> > > > does?
> > > > > If it is the latter case, how will you advance the "threshold
> > > > > transaction_id" each time when it executes?
> > > > >
> > > > > Guozhang
> > > > >
> > > > >
> > > > > On Wed, Jan 20, 2016 at 1:50 PM, Bill Warshaw <
> > bill.wars...@appian.com
> > > >
> > > > > wrote:
> > > > >
> > > > > > Damian, I appreciate your quick response.
> > > > > >
> > > > > > Our transaction_id is incrementing for each transaction, so we
> will
> > > > only
> > > > > > ever have one message in Kafka with a given transaction_id.  We
> > > thought
> > > > > > about using a rolling counter that is incremented on each
> > checkpoint
> > > as
> > > > > the
> > > > > > key, and manually triggering compaction after the checkpoint is
> > > > complete,
> > > > > > but our checkpoints are asynchronous.  This means that we would
> > have
> > > a
> > > > > set
> > > > > > of messages appended to the log after the checkpoint started,
> with
> > > > value
> > > > > of
> > > > > > the previous key + 1, that would also be compacted down to a
> single
> > > > > entry.
> > > > > >
> > > > > > Our particular custom policy would delete all messages whose key
> > was
> > > > less
> > > > > > than a given transaction_id that we passed in.  I can imagine a
> > wide
> > > > > > variety of other custom policies that could be used for retention
> > > based
> > > > > on
> > > > > > the key and value of the message.
> > > > > >
> > > > > > On Wed, Jan 20, 2016 at 1:35 PM, Bill Warshaw <
> > > bill.wars...@appian.com
> > > > >
> > > > > > wrote:
> > > > > >
> > > > > > > Hello,
> > > > > > >
> > > > > > > I'm working on a team that is starting to use Kafka as a
> > > distributed
> > > > > > > transaction log for a set of in-memory databases which can be
> > > > > replicated
> > > > > > > across nodes.  We decided to use Kafka instead of Bookkeeper
> for
> > a
> > > > > > variety
> > > > > > > of reasons, but there are a couple spots where Kafka is not a
> > > perfect
> > > > > > fit.
> > > > > > >
> > > > > > > The biggest issue facing us is deleting old transactions from
> the
> > > log
> > > > > > > after checkpointing the datab

[jira] [Updated] (KAFKA-3132) URI scheme in "listeners" property should not be case-sensitive

2016-01-22 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-3132:

Labels: newbie  (was: )

> URI scheme in "listeners" property should not be case-sensitive
> ---
>
> Key: KAFKA-3132
> URL: https://issues.apache.org/jira/browse/KAFKA-3132
> Project: Kafka
>  Issue Type: Bug
>  Components: config
>Affects Versions: 0.9.0.0
>Reporter: Jake Robb
>Priority: Minor
>  Labels: newbie
>
> I configured my Kafka brokers as follows:
> {{listeners=plaintext://kafka01:9092,ssl://kafka01:9093}}
> With this config, my Kafka brokers start, print out all of the config 
> properties, and exit quietly. No errors, nothing in the log. No indication of 
> a problem whatsoever, let alone the nature of said problem.
> Then, I changed my config as follows:
> {{listeners=PLAINTEXT://kafka01:9092,SSL://kafka01:9093}}
> Now they start and run just fine.
> Per [RFC-3986|https://tools.ietf.org/html/rfc3986#section-6.2.2.1]:
> {quote}
> When a URI uses components of the generic syntax, the component
> syntax equivalence rules always apply; namely, that the scheme and
> host are case-insensitive and therefore should be normalized to
> lowercase.  For example, the URI <HTTP://www.EXAMPLE.com/> is
> equivalent to <http://www.example.com/>.
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-3133) Add putIfAbsent function to KeyValueStore

2016-01-22 Thread Guozhang Wang (JIRA)
Guozhang Wang created KAFKA-3133:


 Summary: Add putIfAbsent function to KeyValueStore
 Key: KAFKA-3133
 URL: https://issues.apache.org/jira/browse/KAFKA-3133
 Project: Kafka
  Issue Type: Sub-task
Reporter: Guozhang Wang


Since a local store will only be accessed by a single stream thread, there are 
no atomicity concerns, and hence this API should be easy to add.
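A minimal sketch of why the single-threaded access makes this trivial. InMemoryStore is a stand-in for the real org.apache.kafka.streams.state.KeyValueStore, not the actual interface; the point is that a plain get-then-put needs no locking or compare-and-swap:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative stand-in for a Kafka Streams local key-value store.
// Because only one stream thread touches the store, get-then-put is safe.
class InMemoryStore<K, V> {
    private final Map<K, V> map = new HashMap<>();

    public V get(K key) { return map.get(key); }

    public void put(K key, V value) { map.put(key, value); }

    // The proposed API: insert only when absent, return the previous value.
    public V putIfAbsent(K key, V value) {
        V old = get(key);
        if (old == null) {
            put(key, value);
        }
        return old;
    }
}

public class PutIfAbsentDemo {
    public static void main(String[] args) {
        InMemoryStore<String, Long> store = new InMemoryStore<>();
        System.out.println(store.putIfAbsent("txn-1", 1L)); // null: inserted
        System.out.println(store.putIfAbsent("txn-1", 2L)); // 1: left unchanged
    }
}
```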



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3068) NetworkClient may connect to a different Kafka cluster than originally configured

2016-01-22 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15112810#comment-15112810
 ] 

Ismael Juma commented on KAFKA-3068:


[~ewencp], that sounds like a worthwhile improvement, I agree. It's a bigger 
change and requires a KIP though so maybe we can file a separate JIRA for it? 
Jun expressed concerns that the problem in this JIRA could be a security issue 
and a fix for 0.9.0.1 would be desirable. As such, it would be great if we 
could find a simple fix for now and explore a more sophisticated one in the 
future.

> NetworkClient may connect to a different Kafka cluster than originally 
> configured
> -
>
> Key: KAFKA-3068
> URL: https://issues.apache.org/jira/browse/KAFKA-3068
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.9.0.0
>Reporter: Jun Rao
>Assignee: Eno Thereska
>
> In https://github.com/apache/kafka/pull/290, we added the logic to cache all 
> brokers (id and ip) that the client has ever seen. If we can't find an 
> available broker from the current Metadata, we will pick a broker that we 
> have ever seen (in NetworkClient.leastLoadedNode()).
> One potential problem this logic can introduce is the following. Suppose that 
> we have a broker with id 1 in a Kafka cluster. A producer client remembers 
> this broker in nodesEverSeen. At some point, we bring down this broker and 
> use the host in a different Kafka cluster. Then, the producer client uses 
> this broker from nodesEverSeen to refresh metadata. It will find the metadata 
> in a different Kafka cluster and start producing data there.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3068) NetworkClient may connect to a different Kafka cluster than originally configured

2016-01-22 Thread Eno Thereska (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15112812#comment-15112812
 ] 

Eno Thereska commented on KAFKA-3068:
-

[~ewencp]: sounds like a KIP? In the old Kafka we could have connected to ZK, 
but we don't have that option anymore. I wonder if ZK can still be used as a 
proxy for a directory service of sorts? We basically need a directory service 
to find other services in the cluster.

> NetworkClient may connect to a different Kafka cluster than originally 
> configured
> -
>
> Key: KAFKA-3068
> URL: https://issues.apache.org/jira/browse/KAFKA-3068
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.9.0.0
>Reporter: Jun Rao
>Assignee: Eno Thereska
>
> In https://github.com/apache/kafka/pull/290, we added the logic to cache all 
> brokers (id and ip) that the client has ever seen. If we can't find an 
> available broker from the current Metadata, we will pick a broker that we 
> have ever seen (in NetworkClient.leastLoadedNode()).
> One potential problem this logic can introduce is the following. Suppose that 
> we have a broker with id 1 in a Kafka cluster. A producer client remembers 
> this broker in nodesEverSeen. At some point, we bring down this broker and 
> use the host in a different Kafka cluster. Then, the producer client uses 
> this broker from nodesEverSeen to refresh metadata. It will find the metadata 
> in a different Kafka cluster and start producing data there.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3126) Weird behavior in kafkaController on Controlled shutdowns. The leaderAndIsr in zookeeper is not updated during controlled shutdown.

2016-01-22 Thread Mayuresh Gharat (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15112795#comment-15112795
 ] 

Mayuresh Gharat commented on KAFKA-3126:


My confusion as described in this JIRA was:
(2) and (3) follow the same code path, and similarly (5) and (6). So ideally (5) 
and (6) should say the same thing: "Cannot remove replica A from ISR of 
partition.."

> Weird behavior in kafkaController on Controlled shutdowns. The leaderAndIsr 
> in zookeeper is not updated during controlled shutdown.
> ---
>
> Key: KAFKA-3126
> URL: https://issues.apache.org/jira/browse/KAFKA-3126
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Reporter: Mayuresh Gharat
>Assignee: Mayuresh Gharat
>
> Consider Broker B is controller, broker A is undergoing shutdown. 
> 2016/01/14 19:49:22.884 [KafkaController] [Controller B]: Shutting down 
> broker A
> 2016/01/14 19:49:22.918 [ReplicaStateMachine] [Replica state machine on 
> controller B]: Invoking state change to OfflineReplica for replicas 
> [Topic=testTopic1,Partition=1,Replica=A] ---> (1)
> 2016/01/14 19:49:22.930 [KafkaController] [Controller B]: New leader and ISR 
> for partition [testTopic1,1] is {"leader":D,"leader_epoch":1,"isr":[D]} 
> --> (2)
> 2016/01/14 19:49:23.028 [ReplicaStateMachine] [Replica state machine on 
> controller B]: Invoking state change to OfflineReplica for replicas 
> [Topic=testTopic2,Partition=1,Replica=A] ---> (3)
> 2016/01/14 19:49:23.032 [KafkaController] [Controller B]: New leader and ISR 
> for partition [testTopic2,1] is {"leader":C,"leader_epoch":10,"isr":[C]} 
> -> (4)
> 2016/01/14 19:49:23.996 [KafkaController] [Controller B]: Broker failure 
> callback for A
> 2016/01/14 19:49:23.997 [PartitionStateMachine] [Partition state machine on 
> Controller B]: Invoking state change to OfflinePartition for partitions 
> 2016/01/14 19:49:23.998 [ReplicaStateMachine] [Replica state machine on 
> controller B]: Invoking state change to OfflineReplica for replicas 
> [Topic=testTopic2,Partition=0,Replica=A],
> [Topic=__consumer_offsets,Partition=5,Replica=A],
> [Topic=testTopic1,Partition=2,Replica=A],
> [Topic=__consumer_offsets,Partition=96,Replica=A],
> [Topic=testTopic2,Partition=1,Replica=A],
> [Topic=__consumer_offsets,Partition=36,Replica=A],
> [Topic=testTopic1,Partition=4,Replica=A],
> [Topic=__consumer_offsets,Partition=85,Replica=A],
> [Topic=testTopic1,Partition=6,Replica=A],
> [Topic=testTopic1,Partition=1,Replica=A]
> 2016/01/14 19:49:24.029 [KafkaController] [Controller B]: New leader and ISR 
> for partition [testTopic2,1] is {"leader":C,"leader_epoch":11,"isr":[C]} 
> --> (5)
> 2016/01/14 19:49:24.212 [KafkaController] [Controller B]: Cannot remove 
> replica A from ISR of partition [testTopic1,1] since it is not in the ISR. 
> Leader = D ; ISR = List(D) --> (6)
> If after (1) and (2) the controller gets rid of replica A from the ISR in 
> zookeeper for [testTopic1-1] as displayed in (6), why doesn't it do the same 
> for [testTopic2-1] as per (5)?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3068) NetworkClient may connect to a different Kafka cluster than originally configured

2016-01-22 Thread Ewen Cheslack-Postava (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15112787#comment-15112787
 ] 

Ewen Cheslack-Postava commented on KAFKA-3068:
--

I'm thinking more about this in combination with KAFKA-3112. In 3112, you may 
get unresolvable nodes over time if your app is running for a long time and 
your bootstrap servers become stale.

The more I think about this, most of these issues seem to ultimately be due to 
the fact that you have to specify a fixed list of bootstrap servers which 
should be valid indefinitely. Assuming you have the right setup, you can work 
around this by, e.g., a VIP/loadbalancer/round robin DNS. But I'm wondering for 
how many people this requires extra work because they are already using some 
other service discovery mechanism? Today, if you want to pull bootstrap data 
from somewhere else, you need to do it at app startup and commit to using that 
fixed set of servers. I'm hesitant to suggest adding more points for extension, 
but wouldn't this be addressed more generally if there was a hook to get a list 
of bootstrap servers and we invoked it at startup, any time we run out of 
options, or fail to connect for too long? The default can just be to use the 
bootstrap servers list, but if you want to grab your list of servers from ZK, 
DNS, Consul, or whatever your system of choice is, you could easily hook those 
in instead and avoid this entire problem.
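The hook described above might be sketched as a supplier the client re-invokes when it runs out of options. BootstrapDemo and both factory methods are hypothetical, not a Kafka API; the default preserves today's fixed bootstrap.servers behavior:

```java
import java.util.List;
import java.util.function.Supplier;

// Sketch of a pluggable bootstrap-server hook: the client calls the supplier
// at startup and again whenever it has no reachable node, instead of being
// stuck forever with the initial bootstrap.servers list.
public class BootstrapDemo {
    // Default behavior: exactly the static bootstrap.servers list.
    static Supplier<List<String>> fromStaticList(List<String> servers) {
        return () -> servers;
    }

    // A discovery-backed supplier (DNS, ZK, Consul, ...) would go here;
    // this one just simulates a refreshed answer.
    static Supplier<List<String>> fromDiscovery() {
        return () -> List.of("kafka03:9092", "kafka04:9092");
    }

    public static void main(String[] args) {
        Supplier<List<String>> bootstrap = fromStaticList(List.of("kafka01:9092"));
        System.out.println(bootstrap.get()); // [kafka01:9092]
        // After prolonged connection failure the client would re-invoke the hook
        // and pick up whatever the discovery mechanism currently reports:
        bootstrap = fromDiscovery();
        System.out.println(bootstrap.get()); // [kafka03:9092, kafka04:9092]
    }
}
```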

> NetworkClient may connect to a different Kafka cluster than originally 
> configured
> -
>
> Key: KAFKA-3068
> URL: https://issues.apache.org/jira/browse/KAFKA-3068
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.9.0.0
>Reporter: Jun Rao
>Assignee: Eno Thereska
>
> In https://github.com/apache/kafka/pull/290, we added the logic to cache all 
> brokers (id and ip) that the client has ever seen. If we can't find an 
> available broker from the current Metadata, we will pick a broker that we 
> have ever seen (in NetworkClient.leastLoadedNode()).
> One potential problem this logic can introduce is the following. Suppose that 
> we have a broker with id 1 in a Kafka cluster. A producer client remembers 
> this broker in nodesEverSeen. At some point, we bring down this broker and 
> use the host in a different Kafka cluster. Then, the producer client uses 
> this broker from nodesEverSeen to refresh metadata. It will find the metadata 
> in a different Kafka cluster and start producing data there.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3061) Get rid of Guava dependency

2016-01-22 Thread Ewen Cheslack-Postava (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15112729#comment-15112729
 ] 

Ewen Cheslack-Postava commented on KAFKA-3061:
--

[~ijuma] I think they could as of 58def1c, because if they specify the full 
class name, the reflections library is never invoked; we only use it to scan 
for subclasses of Connector. However, if we also want to, e.g., expose 
a REST endpoint for listing available connectors, that will require using 
reflections (and this is useful functionality for validating your connector 
plugin installation, for getting metadata when we expose more of that, etc). On 
the other hand, I guess you could still get by in that case by never invoking 
that REST endpoint if you really needed to avoid Guava.

So strictly speaking you should be fine as long as you allow whatever plugin 
has a conflicting version to provide the jar and avoid a small amount of 
functionality. The real question is whether we think telling people that is a 
reasonable response to incompatibilities... I don't have a strong opinion 
because at this point it's hard for us to know how many connectors might pull 
in Guava via whatever library they are using to connect to the external system. 
This might be a case where it's fine to leave it in with these possible 
limitations if you encounter a conflict, and if we see too many problems we can 
replace it with another library or write that code ourselves.

> Get rid of Guava dependency
> ---
>
> Key: KAFKA-3061
> URL: https://issues.apache.org/jira/browse/KAFKA-3061
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>
> KAFKA-2422 adds Reflections library to KafkaConnect, which depends on Guava.
> Since lots of people want to use Guavas, having it in the framework will lead 
> to conflicts.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2066) Replace FetchRequest / FetchResponse with their org.apache.kafka.common.requests equivalents

2016-01-22 Thread David Jacot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15112619#comment-15112619
 ] 

David Jacot commented on KAFKA-2066:


[~becket_qin] That makes sense. I'll take a look at your PR.

Regarding (1), indeed, old producer and consumer stay as they are today.

> Replace FetchRequest / FetchResponse with their 
> org.apache.kafka.common.requests equivalents
> 
>
> Key: KAFKA-2066
> URL: https://issues.apache.org/jira/browse/KAFKA-2066
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Gwen Shapira
>Assignee: David Jacot
>
> Replace FetchRequest / FetchResponse with their 
> org.apache.kafka.common.requests equivalents.
> Note that they can't be completely removed until we deprecate the 
> SimpleConsumer API (and it will require very careful patchwork for the places 
> where core modules actually use the SimpleConsumer API).
> This also requires a solution on how to stream from memory-mapped files 
> (similar to what existing code does with FileMessageSet).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3068) NetworkClient may connect to a different Kafka cluster than originally configured

2016-01-22 Thread Eno Thereska (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15112574#comment-15112574
 ] 

Eno Thereska commented on KAFKA-3068:
-

We'll go with keeping the original bootstrap brokers.

> NetworkClient may connect to a different Kafka cluster than originally 
> configured
> -
>
> Key: KAFKA-3068
> URL: https://issues.apache.org/jira/browse/KAFKA-3068
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.9.0.0
>Reporter: Jun Rao
>Assignee: Eno Thereska
>
> In https://github.com/apache/kafka/pull/290, we added the logic to cache all 
> brokers (id and ip) that the client has ever seen. If we can't find an 
> available broker from the current Metadata, we will pick a broker that we 
> have ever seen (in NetworkClient.leastLoadedNode()).
> One potential problem this logic can introduce is the following. Suppose that 
> we have a broker with id 1 in a Kafka cluster. A producer client remembers 
> this broker in nodesEverSeen. At some point, we bring down this broker and 
> use the host in a different Kafka cluster. Then, the producer client uses 
> this broker from nodesEverSeen to refresh metadata. It will find the metadata 
> in a different Kafka cluster and start producing data there.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3132) URI scheme in "listeners" property should not be case-sensitive

2016-01-22 Thread Jake Robb (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jake Robb updated KAFKA-3132:
-
Summary: URI scheme in "listeners" property should not be case-sensitive  
(was: Protocols in "listeners" property should not be case-sensitive)

> URI scheme in "listeners" property should not be case-sensitive
> ---
>
> Key: KAFKA-3132
> URL: https://issues.apache.org/jira/browse/KAFKA-3132
> Project: Kafka
>  Issue Type: Bug
>  Components: config
>Affects Versions: 0.9.0.0
>Reporter: Jake Robb
>Priority: Minor
>
> I configured my Kafka brokers as follows:
> {{listeners=plaintext://kafka01:9092,ssl://kafka01:9093}}
> With this config, my Kafka brokers start, print out all of the config 
> properties, and exit quietly. No errors, nothing in the log. No indication of 
> a problem whatsoever, let alone the nature of said problem.
> Then, I changed my config as follows:
> {{listeners=PLAINTEXT://kafka01:9092,SSL://kafka01:9093}}
> Now they start and run just fine.
> Per [RFC-3986|https://tools.ietf.org/html/rfc3986#section-6.2.2.1]:
> {quote}
> When a URI uses components of the generic syntax, the component
> syntax equivalence rules always apply; namely, that the scheme and
> host are case-insensitive and therefore should be normalized to
> lowercase.  For example, the URI <HTTP://www.EXAMPLE.com/> is
> equivalent to <http://www.example.com/>.
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-3132) Protocols in "listeners" property should not be case-sensitive

2016-01-22 Thread Jake Robb (JIRA)
Jake Robb created KAFKA-3132:


 Summary: Protocols in "listeners" property should not be 
case-sensitive
 Key: KAFKA-3132
 URL: https://issues.apache.org/jira/browse/KAFKA-3132
 Project: Kafka
  Issue Type: Bug
  Components: config
Affects Versions: 0.9.0.0
Reporter: Jake Robb
Priority: Minor


I configured my Kafka brokers as follows:

{{listeners=plaintext://kafka01:9092,ssl://kafka01:9093}}

With this config, my Kafka brokers start, print out all of the config 
properties, and exit quietly. No errors, nothing in the log. No indication of a 
problem whatsoever, let alone the nature of said problem.

Then, I changed my config as follows:
{{listeners=PLAINTEXT://kafka01:9092,SSL://kafka01:9093}}

Now they start and run just fine.

Per [RFC-3986|https://tools.ietf.org/html/rfc3986#section-6.2.2.1]:

{quote}
When a URI uses components of the generic syntax, the component
syntax equivalence rules always apply; namely, that the scheme and
host are case-insensitive and therefore should be normalized to
lowercase.  For example, the URI <HTTP://www.EXAMPLE.com/> is
equivalent to <http://www.example.com/>.
{quote}
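The fix suggested by RFC 3986 is to normalize the scheme before matching it against the security protocol names. ListenerParser is an illustrative helper, not Kafka's actual parsing code:

```java
import java.net.URI;
import java.util.Locale;

public class ListenerParser {
    // Hypothetical helper: per RFC 3986 the scheme is case-insensitive, so
    // normalize it before comparing against SecurityProtocol names, which
    // Kafka spells in upper case (PLAINTEXT, SSL, ...).
    static String normalizedProtocol(String listener) {
        URI uri = URI.create(listener);
        return uri.getScheme().toUpperCase(Locale.ROOT);
    }

    public static void main(String[] args) {
        // Both spellings now resolve to the same protocol name.
        System.out.println(normalizedProtocol("plaintext://kafka01:9092")); // PLAINTEXT
        System.out.println(normalizedProtocol("SSL://kafka01:9093"));       // SSL
    }
}
```

With normalization in place, the lowercase config in this report would be accepted instead of causing a silent exit.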



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-3131) Inappropriate logging level for SSL Problem

2016-01-22 Thread Jake Robb (JIRA)
Jake Robb created KAFKA-3131:


 Summary: Inappropriate logging level for SSL Problem
 Key: KAFKA-3131
 URL: https://issues.apache.org/jira/browse/KAFKA-3131
 Project: Kafka
  Issue Type: Bug
Reporter: Jake Robb
Priority: Minor


I didn't have my truststore set up correctly. The Kafka client JAR waited until 
the connection timed out (60 seconds in my case) and then threw this exception:

{code}
Exception in thread "main" java.util.concurrent.ExecutionException: 
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata 
after 60000 ms.
at 
org.apache.kafka.clients.producer.KafkaProducer$FutureFailure.(KafkaProducer.java:706)
at 
org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:453)
at 
org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:339)
{code}

I changed my log level to DEBUG and found this, less than two seconds after 
startup:
{code}
[DEBUG] @ 2016-01-22 10:10:34,095 
[User: ; Server: ; Client: ; URL: ; ChangeGroup: ]
 org.apache.kafka.common.network.Selector  - Connection with kafka02/10.0.0.2 
disconnected 

javax.net.ssl.SSLHandshakeException: General SSLEngine problem
at sun.security.ssl.Handshaker.checkThrown(Handshaker.java:1364)
at 
sun.security.ssl.SSLEngineImpl.checkTaskThrown(SSLEngineImpl.java:529)
at 
sun.security.ssl.SSLEngineImpl.writeAppRecord(SSLEngineImpl.java:1194)
at sun.security.ssl.SSLEngineImpl.wrap(SSLEngineImpl.java:1166)
at javax.net.ssl.SSLEngine.wrap(SSLEngine.java:469)
at 
org.apache.kafka.common.network.SslTransportLayer.handshakeWrap(SslTransportLayer.java:377)
at 
org.apache.kafka.common.network.SslTransportLayer.handshake(SslTransportLayer.java:242)
at 
org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:68)
at org.apache.kafka.common.network.Selector.poll(Selector.java:281)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:270)
at 
org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:216)
at 
org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:128)
at java.lang.Thread.run(Thread.java:745)
Caused by: javax.net.ssl.SSLHandshakeException: General SSLEngine problem
at sun.security.ssl.Alerts.getSSLException(Alerts.java:192)
at sun.security.ssl.SSLEngineImpl.fatal(SSLEngineImpl.java:1708)
at sun.security.ssl.Handshaker.fatalSE(Handshaker.java:303)
at sun.security.ssl.Handshaker.fatalSE(Handshaker.java:295)
at 
sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1369)
at 
sun.security.ssl.ClientHandshaker.processMessage(ClientHandshaker.java:156)
at sun.security.ssl.Handshaker.processLoop(Handshaker.java:925)
at sun.security.ssl.Handshaker$1.run(Handshaker.java:865)
at sun.security.ssl.Handshaker$1.run(Handshaker.java:862)
at java.security.AccessController.doPrivileged(Native Method)
at sun.security.ssl.Handshaker$DelegatedTask.run(Handshaker.java:1302)
at 
org.apache.kafka.common.network.SslTransportLayer.runDelegatedTasks(SslTransportLayer.java:335)
at 
org.apache.kafka.common.network.SslTransportLayer.handshakeUnwrap(SslTransportLayer.java:413)
at 
org.apache.kafka.common.network.SslTransportLayer.handshake(SslTransportLayer.java:269)
... 6 more
Caused by: sun.security.validator.ValidatorException: PKIX path building 
failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to 
find valid certification path to requested target
at sun.security.validator.PKIXValidator.doBuild(PKIXValidator.java:387)
at 
sun.security.validator.PKIXValidator.engineValidate(PKIXValidator.java:292)
at sun.security.validator.Validator.validate(Validator.java:260)
at 
sun.security.ssl.X509TrustManagerImpl.validate(X509TrustManagerImpl.java:324)
at 
sun.security.ssl.X509TrustManagerImpl.checkTrusted(X509TrustManagerImpl.java:281)
at 
sun.security.ssl.X509TrustManagerImpl.checkServerTrusted(X509TrustManagerImpl.java:136)
at 
sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1356)
... 15 more
Caused by: sun.security.provider.certpath.SunCertPathBuilderException: unable 
to find valid certification path to requested target
at 
sun.security.provider.certpath.SunCertPathBuilder.build(SunCertPathBuilder.java:145)
at 
sun.security.provider.certpath.SunCertPathBuilder.engineBuild(SunCertPathBuilder.java:131)
at java.security.cert.CertPathBuilder.build(CertPathBuilder.java:280)
at sun.security.validator.PKIXValidator.doBuild(PKIXValidator.java:382)
... 21 more
{code}

There are two problems here:
1. The log level should be ERROR if the Kafka producer cannot connect because 
of an SSL handshake problem, as this error is not likely to be intermittent or 
recoverable without intervention.

[jira] [Updated] (KAFKA-3131) Inappropriate logging level for SSL Problem

2016-01-22 Thread Jake Robb (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jake Robb updated KAFKA-3131:
-
Description: 
I didn't have my truststore set up correctly. The Kafka producer waited until 
the connection timed out (60 seconds in my case) and then threw this exception:

{code}
Exception in thread "main" java.util.concurrent.ExecutionException: 
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata 
after 60000 ms.
at 
org.apache.kafka.clients.producer.KafkaProducer$FutureFailure.(KafkaProducer.java:706)
at 
org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:453)
at 
org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:339)
{code}

I changed my log level to DEBUG and found this, less than two seconds after 
startup:
{code}
[DEBUG] @ 2016-01-22 10:10:34,095 
[User: ; Server: ; Client: ; URL: ; ChangeGroup: ]
 org.apache.kafka.common.network.Selector  - Connection with kafka02/10.0.0.2 
disconnected 

javax.net.ssl.SSLHandshakeException: General SSLEngine problem
at sun.security.ssl.Handshaker.checkThrown(Handshaker.java:1364)
at 
sun.security.ssl.SSLEngineImpl.checkTaskThrown(SSLEngineImpl.java:529)
at 
sun.security.ssl.SSLEngineImpl.writeAppRecord(SSLEngineImpl.java:1194)
at sun.security.ssl.SSLEngineImpl.wrap(SSLEngineImpl.java:1166)
at javax.net.ssl.SSLEngine.wrap(SSLEngine.java:469)
at 
org.apache.kafka.common.network.SslTransportLayer.handshakeWrap(SslTransportLayer.java:377)
at 
org.apache.kafka.common.network.SslTransportLayer.handshake(SslTransportLayer.java:242)
at 
org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:68)
at org.apache.kafka.common.network.Selector.poll(Selector.java:281)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:270)
at 
org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:216)
at 
org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:128)
at java.lang.Thread.run(Thread.java:745)
Caused by: javax.net.ssl.SSLHandshakeException: General SSLEngine problem
at sun.security.ssl.Alerts.getSSLException(Alerts.java:192)
at sun.security.ssl.SSLEngineImpl.fatal(SSLEngineImpl.java:1708)
at sun.security.ssl.Handshaker.fatalSE(Handshaker.java:303)
at sun.security.ssl.Handshaker.fatalSE(Handshaker.java:295)
at 
sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1369)
at 
sun.security.ssl.ClientHandshaker.processMessage(ClientHandshaker.java:156)
at sun.security.ssl.Handshaker.processLoop(Handshaker.java:925)
at sun.security.ssl.Handshaker$1.run(Handshaker.java:865)
at sun.security.ssl.Handshaker$1.run(Handshaker.java:862)
at java.security.AccessController.doPrivileged(Native Method)
at sun.security.ssl.Handshaker$DelegatedTask.run(Handshaker.java:1302)
at 
org.apache.kafka.common.network.SslTransportLayer.runDelegatedTasks(SslTransportLayer.java:335)
at 
org.apache.kafka.common.network.SslTransportLayer.handshakeUnwrap(SslTransportLayer.java:413)
at 
org.apache.kafka.common.network.SslTransportLayer.handshake(SslTransportLayer.java:269)
... 6 more
Caused by: sun.security.validator.ValidatorException: PKIX path building 
failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to 
find valid certification path to requested target
at sun.security.validator.PKIXValidator.doBuild(PKIXValidator.java:387)
at 
sun.security.validator.PKIXValidator.engineValidate(PKIXValidator.java:292)
at sun.security.validator.Validator.validate(Validator.java:260)
at 
sun.security.ssl.X509TrustManagerImpl.validate(X509TrustManagerImpl.java:324)
at 
sun.security.ssl.X509TrustManagerImpl.checkTrusted(X509TrustManagerImpl.java:281)
at 
sun.security.ssl.X509TrustManagerImpl.checkServerTrusted(X509TrustManagerImpl.java:136)
at 
sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1356)
... 15 more
Caused by: sun.security.provider.certpath.SunCertPathBuilderException: unable 
to find valid certification path to requested target
at 
sun.security.provider.certpath.SunCertPathBuilder.build(SunCertPathBuilder.java:145)
at 
sun.security.provider.certpath.SunCertPathBuilder.engineBuild(SunCertPathBuilder.java:131)
at java.security.cert.CertPathBuilder.build(CertPathBuilder.java:280)
at sun.security.validator.PKIXValidator.doBuild(PKIXValidator.java:382)
... 21 more
{code}

There are two problems here:
1. The log level should be ERROR if the Kafka producer cannot connect because 
of an SSL handshake problem, as this error is not likely to be intermittent or 
recoverable without intervention, and the throw
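On the configuration side, this particular failure mode is usually avoided by pointing the client at a truststore that contains the broker's CA certificate. As a minimal sketch (the bootstrap address, truststore path, and password below are hypothetical placeholders, not values from this report), the relevant producer properties look like:

```java
import java.util.Properties;

public class SslProducerConfig {
    // Builds the SSL-related producer properties. The truststore path and
    // password are placeholders; substitute values for your environment.
    static Properties sslProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka02:9093");  // hypothetical broker address
        props.put("security.protocol", "SSL");
        props.put("ssl.truststore.location", "/etc/kafka/client.truststore.jks");
        props.put("ssl.truststore.password", "changeit");
        return props;
    }

    public static void main(String[] args) {
        Properties p = sslProps();
        // If the truststore does not contain the broker's CA certificate, the
        // handshake fails with the PKIX "unable to find valid certification
        // path" error shown in the trace above.
        System.out.println(p.getProperty("security.protocol"));
    }
}
```

These are the standard client config keys; the point of the bug report is that when they are wrong, the handshake failure is only visible at DEBUG level.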

[jira] [Assigned] (KAFKA-3131) Inappropriate logging level for SSL Problem

2016-01-22 Thread Sriharsha Chintalapani (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sriharsha Chintalapani reassigned KAFKA-3131:
-

Assignee: Sriharsha Chintalapani

> Inappropriate logging level for SSL Problem
> ---
>
> Key: KAFKA-3131
> URL: https://issues.apache.org/jira/browse/KAFKA-3131
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Reporter: Jake Robb
>Assignee: Sriharsha Chintalapani
>Priority: Minor
> Attachments: kafka-ssl-error-debug-log.txt
>
>
> I didn't have my truststore set up correctly. The Kafka producer waited until 
> the connection timed out (60 seconds in my case) and then threw this 
> exception:
> {code}
> Exception in thread "main" java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.TimeoutException: Failed to update metadata 
> after 60000 ms. 
>   at 
> org.apache.kafka.clients.producer.KafkaProducer$FutureFailure.(KafkaProducer.java:706)
>   at 
> org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:453)
>   at 
> org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:339)
> {code}
> I changed my log level to DEBUG and found this, less than two seconds after 
> startup:
> {code}
> [DEBUG] @ 2016-01-22 10:10:34,095 
> [User: ; Server: ; Client: ; URL: ; ChangeGroup: ]
>  org.apache.kafka.common.network.Selector  - Connection with kafka02/10.0.0.2 
> disconnected 
> javax.net.ssl.SSLHandshakeException: General SSLEngine problem
>   at sun.security.ssl.Handshaker.checkThrown(Handshaker.java:1364)
>   at 
> sun.security.ssl.SSLEngineImpl.checkTaskThrown(SSLEngineImpl.java:529)
>   at 
> sun.security.ssl.SSLEngineImpl.writeAppRecord(SSLEngineImpl.java:1194)
>   at sun.security.ssl.SSLEngineImpl.wrap(SSLEngineImpl.java:1166)
>   at javax.net.ssl.SSLEngine.wrap(SSLEngine.java:469)
>   at 
> org.apache.kafka.common.network.SslTransportLayer.handshakeWrap(SslTransportLayer.java:377)
>   at 
> org.apache.kafka.common.network.SslTransportLayer.handshake(SslTransportLayer.java:242)
>   at 
> org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:68)
>   at org.apache.kafka.common.network.Selector.poll(Selector.java:281)
>   at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:270)
>   at 
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:216)
>   at 
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:128)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: javax.net.ssl.SSLHandshakeException: General SSLEngine problem
>   at sun.security.ssl.Alerts.getSSLException(Alerts.java:192)
>   at sun.security.ssl.SSLEngineImpl.fatal(SSLEngineImpl.java:1708)
>   at sun.security.ssl.Handshaker.fatalSE(Handshaker.java:303)
>   at sun.security.ssl.Handshaker.fatalSE(Handshaker.java:295)
>   at 
> sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1369)
>   at 
> sun.security.ssl.ClientHandshaker.processMessage(ClientHandshaker.java:156)
>   at sun.security.ssl.Handshaker.processLoop(Handshaker.java:925)
>   at sun.security.ssl.Handshaker$1.run(Handshaker.java:865)
>   at sun.security.ssl.Handshaker$1.run(Handshaker.java:862)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at sun.security.ssl.Handshaker$DelegatedTask.run(Handshaker.java:1302)
>   at 
> org.apache.kafka.common.network.SslTransportLayer.runDelegatedTasks(SslTransportLayer.java:335)
>   at 
> org.apache.kafka.common.network.SslTransportLayer.handshakeUnwrap(SslTransportLayer.java:413)
>   at 
> org.apache.kafka.common.network.SslTransportLayer.handshake(SslTransportLayer.java:269)
>   ... 6 more
> Caused by: sun.security.validator.ValidatorException: PKIX path building 
> failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to 
> find valid certification path to requested target
>   at sun.security.validator.PKIXValidator.doBuild(PKIXValidator.java:387)
>   at 
> sun.security.validator.PKIXValidator.engineValidate(PKIXValidator.java:292)
>   at sun.security.validator.Validator.validate(Validator.java:260)
>   at 
> sun.security.ssl.X509TrustManagerImpl.validate(X509TrustManagerImpl.java:324)
>   at 
> sun.security.ssl.X509TrustManagerImpl.checkTrusted(X509TrustManagerImpl.java:281)
>   at 
> sun.security.ssl.X509TrustManagerImpl.checkServerTrusted(X509TrustManagerImpl.java:136)
>   at 
> sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1356)
>   ... 15 more
> Caused by: sun.security.provider.certpath.SunCertPathBuilderException: unable 
> to find valid certification path to requested target
>   at 
> sun.security.provide

[jira] [Updated] (KAFKA-3131) Inappropriate logging level for SSL Problem

2016-01-22 Thread Jake Robb (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jake Robb updated KAFKA-3131:
-
Attachment: kafka-ssl-error-debug-log.txt

> Inappropriate logging level for SSL Problem
> ---
>
> Key: KAFKA-3131
> URL: https://issues.apache.org/jira/browse/KAFKA-3131
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Reporter: Jake Robb
>Priority: Minor
> Attachments: kafka-ssl-error-debug-log.txt
>
>
> I didn't have my truststore set up correctly. The Kafka producer waited until 
> the connection timed out (60 seconds in my case) and then threw this 
> exception:
> {code}
> Exception in thread "main" java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.TimeoutException: Failed to update metadata 
> after 60000 ms. 
>   at 
> org.apache.kafka.clients.producer.KafkaProducer$FutureFailure.(KafkaProducer.java:706)
>   at 
> org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:453)
>   at 
> org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:339)
> {code}
> I changed my log level to DEBUG and found this, less than two seconds after 
> startup:
> {code}
> [DEBUG] @ 2016-01-22 10:10:34,095 
> [User: ; Server: ; Client: ; URL: ; ChangeGroup: ]
>  org.apache.kafka.common.network.Selector  - Connection with kafka02/10.0.0.2 
> disconnected 
> javax.net.ssl.SSLHandshakeException: General SSLEngine problem
>   at sun.security.ssl.Handshaker.checkThrown(Handshaker.java:1364)
>   at 
> sun.security.ssl.SSLEngineImpl.checkTaskThrown(SSLEngineImpl.java:529)
>   at 
> sun.security.ssl.SSLEngineImpl.writeAppRecord(SSLEngineImpl.java:1194)
>   at sun.security.ssl.SSLEngineImpl.wrap(SSLEngineImpl.java:1166)
>   at javax.net.ssl.SSLEngine.wrap(SSLEngine.java:469)
>   at 
> org.apache.kafka.common.network.SslTransportLayer.handshakeWrap(SslTransportLayer.java:377)
>   at 
> org.apache.kafka.common.network.SslTransportLayer.handshake(SslTransportLayer.java:242)
>   at 
> org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:68)
>   at org.apache.kafka.common.network.Selector.poll(Selector.java:281)
>   at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:270)
>   at 
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:216)
>   at 
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:128)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: javax.net.ssl.SSLHandshakeException: General SSLEngine problem
>   at sun.security.ssl.Alerts.getSSLException(Alerts.java:192)
>   at sun.security.ssl.SSLEngineImpl.fatal(SSLEngineImpl.java:1708)
>   at sun.security.ssl.Handshaker.fatalSE(Handshaker.java:303)
>   at sun.security.ssl.Handshaker.fatalSE(Handshaker.java:295)
>   at 
> sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1369)
>   at 
> sun.security.ssl.ClientHandshaker.processMessage(ClientHandshaker.java:156)
>   at sun.security.ssl.Handshaker.processLoop(Handshaker.java:925)
>   at sun.security.ssl.Handshaker$1.run(Handshaker.java:865)
>   at sun.security.ssl.Handshaker$1.run(Handshaker.java:862)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at sun.security.ssl.Handshaker$DelegatedTask.run(Handshaker.java:1302)
>   at 
> org.apache.kafka.common.network.SslTransportLayer.runDelegatedTasks(SslTransportLayer.java:335)
>   at 
> org.apache.kafka.common.network.SslTransportLayer.handshakeUnwrap(SslTransportLayer.java:413)
>   at 
> org.apache.kafka.common.network.SslTransportLayer.handshake(SslTransportLayer.java:269)
>   ... 6 more
> Caused by: sun.security.validator.ValidatorException: PKIX path building 
> failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to 
> find valid certification path to requested target
>   at sun.security.validator.PKIXValidator.doBuild(PKIXValidator.java:387)
>   at 
> sun.security.validator.PKIXValidator.engineValidate(PKIXValidator.java:292)
>   at sun.security.validator.Validator.validate(Validator.java:260)
>   at 
> sun.security.ssl.X509TrustManagerImpl.validate(X509TrustManagerImpl.java:324)
>   at 
> sun.security.ssl.X509TrustManagerImpl.checkTrusted(X509TrustManagerImpl.java:281)
>   at 
> sun.security.ssl.X509TrustManagerImpl.checkServerTrusted(X509TrustManagerImpl.java:136)
>   at 
> sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1356)
>   ... 15 more
> Caused by: sun.security.provider.certpath.SunCertPathBuilderException: unable 
> to find valid certification path to requested target
>   at 
> sun.security.provider.certpath.SunCertPathBuilder.build(SunCertPathBuilder.java:145)
>

[jira] [Updated] (KAFKA-3131) Inappropriate logging level for SSL Problem

2016-01-22 Thread Jake Robb (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jake Robb updated KAFKA-3131:
-
Description: 
I didn't have my truststore set up correctly. The Kafka producer waited until 
the connection timed out (60 seconds in my case) and then threw this exception:

{code}
Exception in thread "main" java.util.concurrent.ExecutionException: 
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata 
after 60000 ms. 
at 
org.apache.kafka.clients.producer.KafkaProducer$FutureFailure.(KafkaProducer.java:706)
at 
org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:453)
at 
org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:339)
{code}

I changed my log level to DEBUG and found this, less than two seconds after 
startup:
{code}
[DEBUG] @ 2016-01-22 10:10:34,095 
[User: ; Server: ; Client: ; URL: ; ChangeGroup: ]
 org.apache.kafka.common.network.Selector  - Connection with kafka02/10.0.0.2 
disconnected 

javax.net.ssl.SSLHandshakeException: General SSLEngine problem
at sun.security.ssl.Handshaker.checkThrown(Handshaker.java:1364)
at 
sun.security.ssl.SSLEngineImpl.checkTaskThrown(SSLEngineImpl.java:529)
at 
sun.security.ssl.SSLEngineImpl.writeAppRecord(SSLEngineImpl.java:1194)
at sun.security.ssl.SSLEngineImpl.wrap(SSLEngineImpl.java:1166)
at javax.net.ssl.SSLEngine.wrap(SSLEngine.java:469)
at 
org.apache.kafka.common.network.SslTransportLayer.handshakeWrap(SslTransportLayer.java:377)
at 
org.apache.kafka.common.network.SslTransportLayer.handshake(SslTransportLayer.java:242)
at 
org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:68)
at org.apache.kafka.common.network.Selector.poll(Selector.java:281)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:270)
at 
org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:216)
at 
org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:128)
at java.lang.Thread.run(Thread.java:745)
Caused by: javax.net.ssl.SSLHandshakeException: General SSLEngine problem
at sun.security.ssl.Alerts.getSSLException(Alerts.java:192)
at sun.security.ssl.SSLEngineImpl.fatal(SSLEngineImpl.java:1708)
at sun.security.ssl.Handshaker.fatalSE(Handshaker.java:303)
at sun.security.ssl.Handshaker.fatalSE(Handshaker.java:295)
at 
sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1369)
at 
sun.security.ssl.ClientHandshaker.processMessage(ClientHandshaker.java:156)
at sun.security.ssl.Handshaker.processLoop(Handshaker.java:925)
at sun.security.ssl.Handshaker$1.run(Handshaker.java:865)
at sun.security.ssl.Handshaker$1.run(Handshaker.java:862)
at java.security.AccessController.doPrivileged(Native Method)
at sun.security.ssl.Handshaker$DelegatedTask.run(Handshaker.java:1302)
at 
org.apache.kafka.common.network.SslTransportLayer.runDelegatedTasks(SslTransportLayer.java:335)
at 
org.apache.kafka.common.network.SslTransportLayer.handshakeUnwrap(SslTransportLayer.java:413)
at 
org.apache.kafka.common.network.SslTransportLayer.handshake(SslTransportLayer.java:269)
... 6 more
Caused by: sun.security.validator.ValidatorException: PKIX path building 
failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to 
find valid certification path to requested target
at sun.security.validator.PKIXValidator.doBuild(PKIXValidator.java:387)
at 
sun.security.validator.PKIXValidator.engineValidate(PKIXValidator.java:292)
at sun.security.validator.Validator.validate(Validator.java:260)
at 
sun.security.ssl.X509TrustManagerImpl.validate(X509TrustManagerImpl.java:324)
at 
sun.security.ssl.X509TrustManagerImpl.checkTrusted(X509TrustManagerImpl.java:281)
at 
sun.security.ssl.X509TrustManagerImpl.checkServerTrusted(X509TrustManagerImpl.java:136)
at 
sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1356)
... 15 more
Caused by: sun.security.provider.certpath.SunCertPathBuilderException: unable 
to find valid certification path to requested target
at 
sun.security.provider.certpath.SunCertPathBuilder.build(SunCertPathBuilder.java:145)
at 
sun.security.provider.certpath.SunCertPathBuilder.engineBuild(SunCertPathBuilder.java:131)
at java.security.cert.CertPathBuilder.build(CertPathBuilder.java:280)
at sun.security.validator.PKIXValidator.doBuild(PKIXValidator.java:382)
... 21 more
{code}

There are two problems here:
1. The log level should be ERROR if the Kafka producer cannot connect because 
of an SSL handshake problem, as this error is not likely to be intermittent or 
recoverable without intervention. 
2. Ideally, 

[jira] [Updated] (KAFKA-3131) Inappropriate logging level for SSL Problem

2016-01-22 Thread Jake Robb (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jake Robb updated KAFKA-3131:
-
Description: 
I didn't have my truststore set up correctly. The Kafka producer waited until 
the connection timed out (60 seconds in my case) and then threw this exception:

{code}
Exception in thread "main" java.util.concurrent.ExecutionException: 
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata 
after 60000 ms. 
at 
org.apache.kafka.clients.producer.KafkaProducer$FutureFailure.(KafkaProducer.java:706)
at 
org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:453)
at 
org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:339)
{code}

I changed my log level to DEBUG and found this, less than two seconds after 
startup:
{code}
[DEBUG] @ 2016-01-22 10:10:34,095 
[User: ; Server: ; Client: ; URL: ; ChangeGroup: ]
 org.apache.kafka.common.network.Selector  - Connection with kafka02/10.0.0.2 
disconnected 

javax.net.ssl.SSLHandshakeException: General SSLEngine problem
at sun.security.ssl.Handshaker.checkThrown(Handshaker.java:1364)
at 
sun.security.ssl.SSLEngineImpl.checkTaskThrown(SSLEngineImpl.java:529)
at 
sun.security.ssl.SSLEngineImpl.writeAppRecord(SSLEngineImpl.java:1194)
at sun.security.ssl.SSLEngineImpl.wrap(SSLEngineImpl.java:1166)
at javax.net.ssl.SSLEngine.wrap(SSLEngine.java:469)
at 
org.apache.kafka.common.network.SslTransportLayer.handshakeWrap(SslTransportLayer.java:377)
at 
org.apache.kafka.common.network.SslTransportLayer.handshake(SslTransportLayer.java:242)
at 
org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:68)
at org.apache.kafka.common.network.Selector.poll(Selector.java:281)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:270)
at 
org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:216)
at 
org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:128)
at java.lang.Thread.run(Thread.java:745)
Caused by: javax.net.ssl.SSLHandshakeException: General SSLEngine problem
at sun.security.ssl.Alerts.getSSLException(Alerts.java:192)
at sun.security.ssl.SSLEngineImpl.fatal(SSLEngineImpl.java:1708)
at sun.security.ssl.Handshaker.fatalSE(Handshaker.java:303)
at sun.security.ssl.Handshaker.fatalSE(Handshaker.java:295)
at 
sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1369)
at 
sun.security.ssl.ClientHandshaker.processMessage(ClientHandshaker.java:156)
at sun.security.ssl.Handshaker.processLoop(Handshaker.java:925)
at sun.security.ssl.Handshaker$1.run(Handshaker.java:865)
at sun.security.ssl.Handshaker$1.run(Handshaker.java:862)
at java.security.AccessController.doPrivileged(Native Method)
at sun.security.ssl.Handshaker$DelegatedTask.run(Handshaker.java:1302)
at 
org.apache.kafka.common.network.SslTransportLayer.runDelegatedTasks(SslTransportLayer.java:335)
at 
org.apache.kafka.common.network.SslTransportLayer.handshakeUnwrap(SslTransportLayer.java:413)
at 
org.apache.kafka.common.network.SslTransportLayer.handshake(SslTransportLayer.java:269)
... 6 more
Caused by: sun.security.validator.ValidatorException: PKIX path building 
failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to 
find valid certification path to requested target
at sun.security.validator.PKIXValidator.doBuild(PKIXValidator.java:387)
at 
sun.security.validator.PKIXValidator.engineValidate(PKIXValidator.java:292)
at sun.security.validator.Validator.validate(Validator.java:260)
at 
sun.security.ssl.X509TrustManagerImpl.validate(X509TrustManagerImpl.java:324)
at 
sun.security.ssl.X509TrustManagerImpl.checkTrusted(X509TrustManagerImpl.java:281)
at 
sun.security.ssl.X509TrustManagerImpl.checkServerTrusted(X509TrustManagerImpl.java:136)
at 
sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1356)
... 15 more
Caused by: sun.security.provider.certpath.SunCertPathBuilderException: unable 
to find valid certification path to requested target
at 
sun.security.provider.certpath.SunCertPathBuilder.build(SunCertPathBuilder.java:145)
at 
sun.security.provider.certpath.SunCertPathBuilder.engineBuild(SunCertPathBuilder.java:131)
at java.security.cert.CertPathBuilder.build(CertPathBuilder.java:280)
at sun.security.validator.PKIXValidator.doBuild(PKIXValidator.java:382)
... 21 more
{code}

There are two problems here:
1. The log level should be ERROR if the Kafka producer cannot connect because 
of an SSL handshake problem, as this error is not likely to be intermittent or 
recoverable without intervention. 
2. Ideally, 

[jira] [Updated] (KAFKA-3131) Inappropriate logging level for SSL Problem

2016-01-22 Thread Jake Robb (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jake Robb updated KAFKA-3131:
-
Component/s: clients

> Inappropriate logging level for SSL Problem
> ---
>
> Key: KAFKA-3131
> URL: https://issues.apache.org/jira/browse/KAFKA-3131
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Reporter: Jake Robb
>Priority: Minor
>
> I didn't have my truststore set up correctly. The Kafka producer waited until 
> the connection timed out (60 seconds in my case) and then threw this 
> exception:
> {code}
> Exception in thread "main" java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.TimeoutException: Failed to update metadata 
> after 60000 ms. 
>   at 
> org.apache.kafka.clients.producer.KafkaProducer$FutureFailure.(KafkaProducer.java:706)
>   at 
> org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:453)
>   at 
> org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:339)
> {code}
> I changed my log level to DEBUG and found this, less than two seconds after 
> startup:
> {code}
> [DEBUG] @ 2016-01-22 10:10:34,095 
> [User: ; Server: ; Client: ; URL: ; ChangeGroup: ]
>  org.apache.kafka.common.network.Selector  - Connection with kafka02/10.0.0.2 
> disconnected 
> javax.net.ssl.SSLHandshakeException: General SSLEngine problem
>   at sun.security.ssl.Handshaker.checkThrown(Handshaker.java:1364)
>   at 
> sun.security.ssl.SSLEngineImpl.checkTaskThrown(SSLEngineImpl.java:529)
>   at 
> sun.security.ssl.SSLEngineImpl.writeAppRecord(SSLEngineImpl.java:1194)
>   at sun.security.ssl.SSLEngineImpl.wrap(SSLEngineImpl.java:1166)
>   at javax.net.ssl.SSLEngine.wrap(SSLEngine.java:469)
>   at 
> org.apache.kafka.common.network.SslTransportLayer.handshakeWrap(SslTransportLayer.java:377)
>   at 
> org.apache.kafka.common.network.SslTransportLayer.handshake(SslTransportLayer.java:242)
>   at 
> org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:68)
>   at org.apache.kafka.common.network.Selector.poll(Selector.java:281)
>   at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:270)
>   at 
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:216)
>   at 
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:128)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: javax.net.ssl.SSLHandshakeException: General SSLEngine problem
>   at sun.security.ssl.Alerts.getSSLException(Alerts.java:192)
>   at sun.security.ssl.SSLEngineImpl.fatal(SSLEngineImpl.java:1708)
>   at sun.security.ssl.Handshaker.fatalSE(Handshaker.java:303)
>   at sun.security.ssl.Handshaker.fatalSE(Handshaker.java:295)
>   at 
> sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1369)
>   at 
> sun.security.ssl.ClientHandshaker.processMessage(ClientHandshaker.java:156)
>   at sun.security.ssl.Handshaker.processLoop(Handshaker.java:925)
>   at sun.security.ssl.Handshaker$1.run(Handshaker.java:865)
>   at sun.security.ssl.Handshaker$1.run(Handshaker.java:862)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at sun.security.ssl.Handshaker$DelegatedTask.run(Handshaker.java:1302)
>   at 
> org.apache.kafka.common.network.SslTransportLayer.runDelegatedTasks(SslTransportLayer.java:335)
>   at 
> org.apache.kafka.common.network.SslTransportLayer.handshakeUnwrap(SslTransportLayer.java:413)
>   at 
> org.apache.kafka.common.network.SslTransportLayer.handshake(SslTransportLayer.java:269)
>   ... 6 more
> Caused by: sun.security.validator.ValidatorException: PKIX path building 
> failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to 
> find valid certification path to requested target
>   at sun.security.validator.PKIXValidator.doBuild(PKIXValidator.java:387)
>   at 
> sun.security.validator.PKIXValidator.engineValidate(PKIXValidator.java:292)
>   at sun.security.validator.Validator.validate(Validator.java:260)
>   at 
> sun.security.ssl.X509TrustManagerImpl.validate(X509TrustManagerImpl.java:324)
>   at 
> sun.security.ssl.X509TrustManagerImpl.checkTrusted(X509TrustManagerImpl.java:281)
>   at 
> sun.security.ssl.X509TrustManagerImpl.checkServerTrusted(X509TrustManagerImpl.java:136)
>   at 
> sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1356)
>   ... 15 more
> Caused by: sun.security.provider.certpath.SunCertPathBuilderException: unable 
> to find valid certification path to requested target
>   at 
> sun.security.provider.certpath.SunCertPathBuilder.build(SunCertPathBuilder.java:145)
>   at 
> sun.security.provider.certpath.SunCertPathBuilder.engineBuild(SunCe

[jira] [Commented] (KAFKA-3038) Speeding up partition reassignment after broker failure

2016-01-22 Thread Eno Thereska (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15112507#comment-15112507
 ] 

Eno Thereska commented on KAFKA-3038:
-

Closing the initial PR, since there is an opportunity to speed up other parts of the 
controller (in addition to failover). This JIRA will likely become part of a 
larger story.

> Speeding up partition reassignment after broker failure
> ---
>
> Key: KAFKA-3038
> URL: https://issues.apache.org/jira/browse/KAFKA-3038
> Project: Kafka
>  Issue Type: Improvement
>  Components: controller, core
>Affects Versions: 0.9.0.0
>Reporter: Eno Thereska
>Assignee: Eno Thereska
> Fix For: 0.9.0.0
>
>
> After a broker failure, the controller performs several writes to ZooKeeper for 
> each partition on the failed broker. Writes are done one at a time, in a closed 
> loop, which is slow, especially over high-latency networks. ZooKeeper supports 
> batching operations (the "multi" API). Replacing the serial writes with batched 
> ones is expected to reduce failure-handling time by an order of magnitude.
> This is identified as an issue in 
> https://cwiki.apache.org/confluence/display/KAFKA/kafka+Detailed+Replication+Design+V3
>  (section End-to-end latency during a broker failure)
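The expected win can be seen with back-of-envelope arithmetic (the partition count and round-trip time below are illustrative, not measured): serial writes pay one network round trip per partition, while a single batched "multi" request pays roughly one round trip total.

```java
public class ZkBatchingEstimate {
    // Serial: one synchronous ZooKeeper write (and round trip) per partition.
    static double serialMs(int partitions, double rttMs) {
        return partitions * rttMs;
    }

    // Batched: one "multi" request carrying all ops, ignoring server-side cost.
    static double batchedMs(double rttMs) {
        return rttMs;
    }

    public static void main(String[] args) {
        int partitions = 1000;  // illustrative partition count on the failed broker
        double rttMs = 20.0;    // illustrative ZooKeeper round-trip time

        System.out.printf("serial=%.0fms batched=%.0fms speedup=%.0fx%n",
                serialMs(partitions, rttMs),   // 20000 ms
                batchedMs(rttMs),              // 20 ms
                serialMs(partitions, rttMs) / batchedMs(rttMs));
    }
}
```

In practice the ops would be chunked, since a single ZooKeeper request is bounded by the `jute.maxbuffer` limit (about 1 MB by default), so the real speedup is somewhat less than this idealized figure.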



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3038) Speeding up partition reassignment after broker failure

2016-01-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15112505#comment-15112505
 ] 

ASF GitHub Bot commented on KAFKA-3038:
---

Github user enothereska closed the pull request at:

https://github.com/apache/kafka/pull/750


> Speeding up partition reassignment after broker failure
> ---
>
> Key: KAFKA-3038
> URL: https://issues.apache.org/jira/browse/KAFKA-3038
> Project: Kafka
>  Issue Type: Improvement
>  Components: controller, core
>Affects Versions: 0.9.0.0
>Reporter: Eno Thereska
>Assignee: Eno Thereska
> Fix For: 0.9.0.0
>
>
> After a broker failure, the controller performs several writes to ZooKeeper for 
> each partition on the failed broker. Writes are done one at a time, in a closed 
> loop, which is slow, especially over high-latency networks. ZooKeeper supports 
> batching operations (the "multi" API). Replacing the serial writes with batched 
> ones is expected to reduce failure-handling time by an order of magnitude.
> This is identified as an issue in 
> https://cwiki.apache.org/confluence/display/KAFKA/kafka+Detailed+Replication+Design+V3
>  (section End-to-end latency during a broker failure)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-3038: use async ZK calls to speed up lea...

2016-01-22 Thread enothereska
Github user enothereska closed the pull request at:

https://github.com/apache/kafka/pull/750


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-2066) Replace FetchRequest / FetchResponse with their org.apache.kafka.common.requests equivalents

2016-01-22 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15112481#comment-15112481
 ] 

Jiangjie Qin commented on KAFKA-2066:
-

[~dajac] I think it would be better to hold off on writing the patch until 
KIP-31 and KIP-32 are checked in. Otherwise either you or I will need to do a 
big rebase, which would be a waste of time. Another reason is that once KIP-31 
and KIP-32 are checked in, the changes needed for Records, MemoryRecords, and 
FileRecords (to be added) will be very clear, because we simply need to make them 
equivalent to MessageSet, ByteBufferMessageSet, and FileMessageSet. 

If you really want to start writing the patch now, I would suggest you take 
a look at PR#764 to get an idea of what is needed for (1) and (2).

As for (1), we don't need to update the old producer and consumer to use 
Records.


> Replace FetchRequest / FetchResponse with their 
> org.apache.kafka.common.requests equivalents
> 
>
> Key: KAFKA-2066
> URL: https://issues.apache.org/jira/browse/KAFKA-2066
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Gwen Shapira
>Assignee: David Jacot
>
> Replace FetchRequest / FetchResponse with their 
> org.apache.kafka.common.requests equivalents.
> Note that they can't be completely removed until we deprecate the 
> SimpleConsumer API (and it will require very careful patchwork for the places 
> where core modules actually use the SimpleConsumer API).
> This also requires a solution for streaming from memory-mapped files 
> (similar to what the existing code does with FileMessageSet).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1448) Filter-plugins for messages

2016-01-22 Thread Stevo Slavic (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15112349#comment-15112349
 ] 

Stevo Slavic commented on KAFKA-1448:
-

It would be nice if a consumer could pass a lambda (or something similar) so 
that filtering occurs on the server side.
One potential use case is modeling multiple logical topics, carried in messages 
stored in Kafka, on top of a smaller number of physical Kafka topics, with the 
filtering done server-side. Kafka+ZooKeeper is limited in the number of physical 
topics or partitions per cluster (there are reports of ~1M being a practical 
limit), so this could be one workaround.
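Since the broker exposes no such hook today, this kind of filtering has to run client-side after the fetch. A minimal sketch of the predicate shape the comment asks for, using a plain `java.util.function.Predicate` (the `LogicalTopicFilter` name and the `logicalTopic:payload` record format are made up for illustration):

```java
import java.util.List;
import java.util.function.Predicate;
import java.util.stream.Collectors;

public class LogicalTopicFilter {
    // Keeps only records belonging to the given logical topic. In the
    // proposal this predicate would run broker-side; here it runs on the
    // consumer after the records are fetched.
    static List<String> filter(List<String> records, String logicalTopic) {
        Predicate<String> accept = r -> r.startsWith(logicalTopic + ":");
        return records.stream().filter(accept).collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // Records from one physical topic carrying two logical topics.
        List<String> fetched = List.of("orders:1", "clicks:2", "orders:3");
        System.out.println(filter(fetched, "orders")); // [orders:1, orders:3]
    }
}
```

The trade-off versus a server-side hook is that every record still crosses the network before being dropped, which is exactly the cost the comment hopes to avoid.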

> Filter-plugins for messages
> ---
>
> Key: KAFKA-1448
> URL: https://issues.apache.org/jira/browse/KAFKA-1448
> Project: Kafka
>  Issue Type: Wish
>  Components: consumer
>Reporter: Moritz Möller
>Assignee: Neha Narkhede
>
> Hi,
> we use Kafka to transmit different events that occur on different products, 
> and would like to be able to subscribe only to a certain set of events for 
> certain products.
> Using one topic for each event * product combination would yield around 2000 
> topics, which does not seem to be what Kafka is designed for.
> What we would need is a way to add a consumer filter plugin to kafka (a 
> simple class that can accept or reject a message) and to pass a parameter 
> from the consumer to that filter class.
> Is there a better way to do this already, or if not, would you accept a patch 
> upstream that adds such a mechanism?
> Thanks,
> Mo



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3129) Potential Console Producer/Consumer Issue

2016-01-22 Thread Arun Mahadevan (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15112348#comment-15112348
 ] 

Arun Mahadevan commented on KAFKA-3129:
---

Are you creating the topic beforehand? 
It may have to do with when the console consumer fetches the offset for the 
topic. I think messages published before the offset is fetched are not 
received by the console consumer.
Can you try running the consumer with --from-beginning? For me, if I pass 
--from-beginning the console consumer always fetches all the messages, but 
without that option it sometimes does not.
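For the new Java consumer, the rough analogue of --from-beginning is the auto.offset.reset setting. This is only a configuration sketch (broker address and group id are placeholders), not a complete client:

```java
import java.util.Properties;

// Rough new-consumer analogue of --from-beginning: for a group with no
// committed offsets, "earliest" starts from the head of the log, while
// the default ("latest") skips records published before the first poll.
Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("group.id", "console-test");
props.put("auto.offset.reset", "earliest");
```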

> Potential Console Producer/Consumer Issue
> -
>
> Key: KAFKA-3129
> URL: https://issues.apache.org/jira/browse/KAFKA-3129
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer, producer 
>Affects Versions: 0.9.0.0
>Reporter: Vahid Hashemian
>Assignee: Neha Narkhede
>
> I have been running a simple test case in which I have a text file 
> {{messages.txt}} with 1,000,000 lines (lines contain numbers from 1 to 
> 1,000,000 in ascending order). I run the console consumer like this:
> {{$ bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test}}
> Topic {{test}} is on 1 partition with a replication factor of 1.
> Then I run the console producer like this:
> {{$ bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test < 
> messages.txt}}
> Then the console starts receiving the messages. And about half the times it 
> goes all the way to 1,000,000. But, in other cases, it stops short, usually 
> at 999,735.
> I tried running another console consumer on another machine and both 
> consumers behave the same way. I can't see anything related to this in the 
> logs.
> I also ran the same experiment with a similar file of 10,000 lines, and am 
> getting a similar behavior. When the consumer does not receive all the 10,000 
> messages it usually stops at 9,864.





[GitHub] kafka pull request: Fixed undefined method `update_guest'

2016-01-22 Thread szwed
GitHub user szwed opened a pull request:

https://github.com/apache/kafka/pull/802

Fixed undefined method `update_guest'



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/szwed/kafka trunk

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/802.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #802


commit 2b399fbc8853e574b046868fe0c994728aee67f0
Author: Piotr Szwed 
Date:   2016-01-22T12:03:46Z

Fixed undefined method `update_guest'




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-2066) Replace FetchRequest / FetchResponse with their org.apache.kafka.common.requests equivalents

2016-01-22 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15112285#comment-15112285
 ] 

Ismael Juma commented on KAFKA-2066:


[~dajac], I suggest you submit a PR for review once 1 and 2 are done; 3 and 4 
can then be done in a subsequent PR. (Even if it ends up being merged as a 
single PR in the end, depending on how the review process goes, it will still 
be useful to review the approach before doing 3 and 4.)

> Replace FetchRequest / FetchResponse with their 
> org.apache.kafka.common.requests equivalents
> 
>
> Key: KAFKA-2066
> URL: https://issues.apache.org/jira/browse/KAFKA-2066
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Gwen Shapira
>Assignee: David Jacot
>
> Replace FetchRequest / FetchResponse with their 
> org.apache.kafka.common.requests equivalents.
> Note that they can't be completely removed until we deprecate the 
> SimpleConsumer API (and it will require very careful patchwork for the places 
> where core modules actually use the SimpleConsumer API).
> This also requires a solution for streaming from memory-mapped files 
> (similar to what the existing code does with FileMessageSet).
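For context, the zero-copy path the description alludes to works roughly like the sketch below. This is a simplified stand-in for what FileMessageSet.writeTo does, with invented names — not the actual Kafka code:

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.WritableByteChannel;

// Simplified sketch of streaming a slice of a log segment file to a socket
// (or any channel) via zero-copy transfer, in the spirit of
// FileMessageSet.writeTo; not the actual Kafka implementation.
class SegmentSlice {
    private final FileChannel channel; // the underlying segment file
    private final long start;          // slice start within the file
    private final long size;           // slice length in bytes

    SegmentSlice(FileChannel channel, long start, long size) {
        this.channel = channel;
        this.start = start;
        this.size = size;
    }

    /** Transfer up to maxBytes of the slice, beginning at offset, into destination. */
    long writeTo(WritableByteChannel destination, long offset, long maxBytes) throws IOException {
        long length = Math.min(maxBytes, size - offset);
        // transferTo lets the kernel move bytes file -> channel without
        // copying through user space.
        return channel.transferTo(start + offset, length, destination);
    }
}
```

The migration question is how to expose an equivalent streaming path through the org.apache.kafka.common.requests response objects.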





Re: KIP for extension of SASL to include additional mechanisms

2016-01-22 Thread Rajini Sivaram
Thank you, Jun.

On Fri, Jan 22, 2016 at 4:26 AM, Jun Rao  wrote:

> Rajini,
>
> Thanks for your interest. I just gave you the permission to Kafka wiki.
>
> Jun
>
> On Thu, Jan 21, 2016 at 5:51 AM, Rajini Sivaram <
> rajinisiva...@googlemail.com> wrote:
>
> > Can I have access to write up a KIP for extending the SASL implementation
> > in Kafka to include more mechanisms? We have the implementation for
> > SASL/PLAIN, but I think it would make sense for the KIP to cover new
> > mechanisms in general.
> >
> > Thank you...
> >
> > Regards,
> >
> > Rajini
> >
>


[jira] [Created] (KAFKA-3130) Consistent option names for command-line tools

2016-01-22 Thread Ismael Juma (JIRA)
Ismael Juma created KAFKA-3130:
--

 Summary: Consistent option names for command-line tools
 Key: KAFKA-3130
 URL: https://issues.apache.org/jira/browse/KAFKA-3130
 Project: Kafka
  Issue Type: Improvement
  Components: tools
Affects Versions: 0.9.0.0
Reporter: Ismael Juma


We should use consistent option names for all of our non-deprecated tools. A 
couple of obvious issues come to mind:

1. Some tools use --broker-list while others use --bootstrap-servers (and some 
support neither, so the value must be passed via the properties mechanism)
2. Some tools only allow the passing of properties inline in the command 
invocation while others only allow a file to be used

I am sure there are more. Part of this JIRA is to do that analysis. Fixing the 
two issues above would already be a big improvement as they are a common need.





[jira] [Commented] (KAFKA-3006) Make collection default container type for sequences in the consumer API

2016-01-22 Thread Pierre-Yves Ritschard (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15112115#comment-15112115
 ] 

Pierre-Yves Ritschard commented on KAFKA-3006:
--

Hi [~gwenshap], I'm happy to go through the process. I don't seem to have 
authorization to create pages in the Kafka space on Confluence, though. My 
user id there is "pyr".

> Make collection default container type for sequences in the consumer API
> 
>
> Key: KAFKA-3006
> URL: https://issues.apache.org/jira/browse/KAFKA-3006
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Affects Versions: 0.9.0.0
>Reporter: Pierre-Yves Ritschard
>  Labels: patch
>
> The KafkaConsumer API has some annoying inconsistencies in the usage of 
> collection types. For example, subscribe() takes a list, but subscription() 
> returns a set. Similarly for assign() and assignment(). We also have pause() 
> , seekToBeginning(), seekToEnd(), and resume() which annoyingly use a 
> variable argument array, which means you have to copy the result of 
> assignment() to an array if you want to pause all assigned partitions. We can 
> solve these issues by adding the following variants:
> {code}
> void subscribe(Collection<String> topics);
> void subscribe(Collection<String> topics, ConsumerRebalanceListener);
> void assign(Collection<TopicPartition> partitions);
> void pause(Collection<TopicPartition> partitions);
> void resume(Collection<TopicPartition> partitions);
> void seekToBeginning(Collection<TopicPartition>);
> void seekToEnd(Collection<TopicPartition>);
> {code}
> This issue supersedes KAFKA-2991.
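The array-copy annoyance described above can be shown with a toy model. This is illustrative only — a miniature stand-in, not the real KafkaConsumer surface:

```java
import java.util.Arrays;
import java.util.Collection;
import java.util.LinkedHashSet;
import java.util.Set;

// Toy model of the API inconsistency: a varargs method forces callers to
// copy a collection into an array, while a Collection overload lets the
// result of assignment() feed pause() directly.
class ToyConsumer {
    final Set<String> paused = new LinkedHashSet<>();

    Set<String> assignment() {
        return new LinkedHashSet<>(Arrays.asList("topic-0", "topic-1"));
    }

    // Current style: variable-argument array.
    void pause(String... partitions) {
        paused.addAll(Arrays.asList(partitions));
    }

    // Proposed style: plain Collection, no copying at the call site.
    void pause(Collection<String> partitions) {
        paused.addAll(partitions);
    }
}
```

With only the varargs form, pausing everything requires `pause(assignment().toArray(new String[0]))`; the Collection variant reduces this to `pause(assignment())`.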





[jira] [Commented] (KAFKA-2985) Consumer group stuck in rebalancing state

2016-01-22 Thread Federico Fissore (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15112100#comment-15112100
 ] 

Federico Fissore commented on KAFKA-2985:
-

My test still shows the consumer stuck waiting for a rebalance across 2 
brokers with no incoming traffic. So no, this issue should stay open, unless 
another issue tracks this odd behaviour or it's a WONTFIX.

> Consumer group stuck in rebalancing state
> -
>
> Key: KAFKA-2985
> URL: https://issues.apache.org/jira/browse/KAFKA-2985
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.9.0.0
> Environment: Kafka 0.9.0.0.
> Kafka Java consumer 0.9.0.0
> 2 Java producers.
> 3 Java consumers using the new consumer API.
> 2 kafka brokers.
>Reporter: Jens Rantil
>Assignee: Jason Gustafson
>
> We've been doing some load testing on Kafka. _After_ the load test, when our 
> consumers are idle, we have now twice seen Kafka become stuck in consumer 
> group rebalancing. This is after all our consumers are done consuming and 
> essentially polling periodically without getting any records.
> The brokers list the consumer group (named "default"), but I can't query the 
> offsets:
> {noformat}
> jrantil@queue-0:/srv/kafka/kafka$ ./bin/kafka-consumer-groups.sh 
> --new-consumer --bootstrap-server localhost:9092 --list
> default
> jrantil@queue-0:/srv/kafka/kafka$ ./bin/kafka-consumer-groups.sh 
> --new-consumer --bootstrap-server localhost:9092 --describe --group 
> default|sort
> Consumer group `default` does not exist or is rebalancing.
> {noformat}
> Retrying to query the offsets for 15 minutes or so still said it was 
> rebalancing. After restarting our first broker, the group immediately started 
> rebalancing. That broker was logging this before restart:
> {noformat}
> [2015-12-12 13:09:48,517] INFO [Group Metadata Manager on Broker 0]: Removed 
> 0 expired offsets in 0 milliseconds. (kafka.coordinator.GroupMetadataManager)
> [2015-12-12 13:10:16,139] INFO [GroupCoordinator 0]: Stabilized group default 
> generation 16 (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:10:16,141] INFO [GroupCoordinator 0]: Assignment received from 
> leader for group default for generation 16 
> (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:10:16,575] INFO [GroupCoordinator 0]: Preparing to restabilize 
> group default with old generation 16 (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:11:15,141] INFO [GroupCoordinator 0]: Stabilized group default 
> generation 17 (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:11:15,143] INFO [GroupCoordinator 0]: Assignment received from 
> leader for group default for generation 17 
> (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:11:15,314] INFO [GroupCoordinator 0]: Preparing to restabilize 
> group default with old generation 17 (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:12:14,144] INFO [GroupCoordinator 0]: Stabilized group default 
> generation 18 (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:12:14,145] INFO [GroupCoordinator 0]: Assignment received from 
> leader for group default for generation 18 
> (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:12:14,340] INFO [GroupCoordinator 0]: Preparing to restabilize 
> group default with old generation 18 (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:13:13,146] INFO [GroupCoordinator 0]: Stabilized group default 
> generation 19 (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:13:13,148] INFO [GroupCoordinator 0]: Assignment received from 
> leader for group default for generation 19 
> (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:13:13,238] INFO [GroupCoordinator 0]: Preparing to restabilize 
> group default with old generation 19 (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:14:12,148] INFO [GroupCoordinator 0]: Stabilized group default 
> generation 20 (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:14:12,149] INFO [GroupCoordinator 0]: Assignment received from 
> leader for group default for generation 20 
> (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:14:12,360] INFO [GroupCoordinator 0]: Preparing to restabilize 
> group default with old generation 20 (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:15:11,150] INFO [GroupCoordinator 0]: Stabilized group default 
> generation 21 (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:15:11,152] INFO [GroupCoordinator 0]: Assignment received from 
> leader for group default for generation 21 
> (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:15:11,217] INFO [GroupCoordinator 0]: Preparing to restabilize 
> group default with old generation 21 (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:16:10,152] INFO [GroupCoordinator 0]: Stabilized group default 
> generation 22 (kafka.coor