[GitHub] kafka pull request: MINOR: Stabilize transient replication test fa...

2016-02-09 Thread granders
Github user granders closed the pull request at:

https://github.com/apache/kafka/pull/890


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: [POLL] Make next Kafka Release 0.10.0.0 instead of 0.9.1.0

2016-02-09 Thread Ismael Juma
Hi Becket,

Thanks for starting the discussion.

Given the significance of the changes, I think 0.10.0.0 is appropriate.

However, I think we have to be careful about the fact that we will be
making two major releases within a short period of time. This means that
0.10.0.0 may be out before many have upgraded to 0.9.0.x. It would be
good to understand the upgrade path for such people. Finally, we should be
very clear about what the version number means in terms of compatibility.
Some people were confused by the significance of the 0.8.3.0 to 0.9.0.0
rename.

Ismael

On Tue, Feb 9, 2016 at 6:07 PM, Becket Qin  wrote:

> Hi All,
>
> The next Kafka release will have several significant new features and
> changes, such as Kafka Streams, the message format change, client
> interceptors, and several new consumer API changes. We feel it is
> better to make the next Kafka release 0.10.0.0 instead of 0.9.1.0.
>
> We would like to see what people think of making the next release
> 0.10.0.0.
>
> Thanks,
>
> Jiangjie (Becket) Qin
>


Jenkins build is back to normal : kafka-trunk-jdk8 #356

2016-02-09 Thread Apache Jenkins Server
See 



Build failed in Jenkins: kafka-trunk-jdk8 #357

2016-02-09 Thread Apache Jenkins Server
See 

Changes:

[cshapi] KAFKA-3216: "Modifying topics" section incorrectly says you can't 
change

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H11 (docker Ubuntu ubuntu yahoo-not-h2) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision 4fd60c6120dc4330deb7317360c835b04bc6cb9a 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 4fd60c6120dc4330deb7317360c835b04bc6cb9a
 > git rev-list cd15321e0d250253abb990af53e1f5624cf46b42 # timeout=10
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson5399299760878233209.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.4-rc-2/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:downloadWrapper

BUILD SUCCESSFUL

Total time: 1 mins 4.244 secs
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson6566393254713743256.sh
+ export GRADLE_OPTS=-Xmx1024m
+ GRADLE_OPTS=-Xmx1024m
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 clean jarAll testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.10/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:clean UP-TO-DATE
:clients:clean UP-TO-DATE
:connect:clean UP-TO-DATE
:core:clean UP-TO-DATE
:examples:clean UP-TO-DATE
:log4j-appender:clean UP-TO-DATE
:streams:clean UP-TO-DATE
:tools:clean UP-TO-DATE
:connect:api:clean UP-TO-DATE
:connect:file:clean UP-TO-DATE
:connect:json:clean UP-TO-DATE
:connect:runtime:clean UP-TO-DATE
:streams:examples:clean UP-TO-DATE
:jar_core_2_10
Building project 'core' with Scala version 2.10.6
:kafka-trunk-jdk8:clients:compileJava
:jar_core_2_10 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Failed to capture snapshot of input files for task 'compileJava' during 
up-to-date check.  See stacktrace for details.
> Could not add entry 
> '/home/jenkins/.gradle/caches/modules-2/files-2.1/net.jpountz.lz4/lz4/1.3.0/c708bb2590c0652a642236ef45d9f99ff842a2ce/lz4-1.3.0.jar'
>  to cache fileHashes.bin 
> (

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED

Total time: 58.802 secs
Build step 'Execute shell' marked build as failure
Recording test results
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
ERROR: Publisher 'Publish JUnit test result report' failed: No test report 
files were found. Configuration error?
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2


Build failed in Jenkins: kafka-trunk-jdk7 #1030

2016-02-09 Thread Apache Jenkins Server
See 

Changes:

[me] KAFKA-3217: Close producers in unit tests

[me] HOTFIX: Fix NPE after standby task reassignment

[me] KAFKA-3189: Kafka server returns UnknownServerException for inherited

[cshapi] KAFKA-3216: "Modifying topics" section incorrectly says you can't 
change

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu-6 (docker Ubuntu ubuntu yahoo-not-h2) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision 4fd60c6120dc4330deb7317360c835b04bc6cb9a 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 4fd60c6120dc4330deb7317360c835b04bc6cb9a
 > git rev-list 9cac38c0216879776b9eab728235e35118e9026e # timeout=10
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/hudson5567531006161643224.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.4-rc-2/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:downloadWrapper

BUILD SUCCESSFUL

Total time: 34.215 secs
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/hudson1290112780523516955.sh
+ export GRADLE_OPTS=-Xmx1024m
+ GRADLE_OPTS=-Xmx1024m
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 clean jarAll testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.10/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:clean UP-TO-DATE
:clients:clean UP-TO-DATE
:connect:clean UP-TO-DATE
:core:clean UP-TO-DATE
:examples:clean UP-TO-DATE
:log4j-appender:clean UP-TO-DATE
:streams:clean UP-TO-DATE
:tools:clean UP-TO-DATE
:connect:api:clean UP-TO-DATE
:connect:file:clean UP-TO-DATE
:connect:json:clean UP-TO-DATE
:connect:runtime:clean UP-TO-DATE
:streams:examples:clean UP-TO-DATE
:jar_core_2_10
Building project 'core' with Scala version 2.10.6
:kafka-trunk-jdk7:clients:compileJava
:jar_core_2_10 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Failed to capture snapshot of input files for task 'compileJava' during 
up-to-date check.  See stacktrace for details.
> Could not add entry 
> '/home/jenkins/.gradle/caches/modules-2/files-2.1/net.jpountz.lz4/lz4/1.3.0/c708bb2590c0652a642236ef45d9f99ff842a2ce/lz4-1.3.0.jar'
>  to cache fileHashes.bin 
> (

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED

Total time: 37.45 secs
Build step 'Execute shell' marked build as failure
Recording test results
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
ERROR: Publisher 'Publish JUnit test result report' failed: No test report 
files were found. Configuration error?
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51


[jira] [Updated] (KAFKA-3223) Add System (ducktape) Test that asserts strict partition ordering despite node failure

2016-02-09 Thread Ben Stopford (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ben Stopford updated KAFKA-3223:

Description: 
Kafka provides strong ordering when max.inflight.requests.per.connection = 1. 
There currently exists a bug (patched but not released) against this feature. 
It's an important feature for many customers so we should add a test to assert 
that the contract is honoured.

Suggest a similar format to ReassignPartitionsTest 
(reassign_partitions_test.py) or ReplicationTest (replication_test.py). It 
should simply be a case of asserting on the order of the messages (which are 
monotonically increasing numbers in these tests) in each partition with the 
inflight configuration whilst incurring repeated node failure.

Note that this jira depends on the merge of KAFKA-3197 

  was:
Kafka provides strong ordering when max.inflight.requests.per.connection = 1. 
There currently exists a bug (patched but not released) against this feature. 
It's an important feature for many customers so we should add a test to assert 
that the contract is honoured.

Suggest a similar format to ReassignPartitionsTest 
(reassign_partitions_test.py) or ReplicationTest (replication_test.py). It 
should simply be a case of asserting on the order of the messages (which are 
monotonically increasing numbers in these tests) in each partition with the 
inflight configuration.

Note that this jira depends on the merge of KAFKA-3197 


> Add System (ducktape) Test that asserts strict partition ordering despite 
> node failure
> --
>
> Key: KAFKA-3223
> URL: https://issues.apache.org/jira/browse/KAFKA-3223
> Project: Kafka
>  Issue Type: Test
>Reporter: Ben Stopford
>
> Kafka provides strong ordering when max.inflight.requests.per.connection = 1. 
> There currently exists a bug (patched but not released) against this feature. 
> It's an important feature for many customers so we should add a test to 
> assert that the contract is honoured.
> Suggest a similar format to ReassignPartitionsTest 
> (reassign_partitions_test.py) or ReplicationTest (replication_test.py). It 
> should simply be a case of asserting on the order of the messages (which are 
> monotonically increasing numbers in these tests) in each partition with the 
> inflight configuration whilst incurring repeated node failure.
> Note that this jira depends on the merge of KAFKA-3197 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
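
A minimal sketch of the per-partition ordering assertion the ticket describes
(this is not the proposed ducktape test itself): the produced values are
monotonically increasing integers, so any non-increasing value signals an
ordering violation. The topic name, group id, and bootstrap address are
illustrative assumptions.

{code}
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class OrderingCheck {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // illustrative
        props.put("group.id", "ordering-check");          // illustrative
        props.put("auto.offset.reset", "earliest");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        // Read one partition directly so the check is strictly per-partition.
        consumer.assign(Collections.singletonList(new TopicPartition("test-topic", 0)));
        long previous = -1;
        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(1000);
            for (ConsumerRecord<String, String> record : records) {
                long value = Long.parseLong(record.value());
                // Strictly increasing values mean ordering survived the node failures.
                if (value <= previous) {
                    throw new IllegalStateException(
                            "Ordering violated: saw " + value + " after " + previous);
                }
                previous = value;
            }
        }
    }
}
{code}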


[jira] [Resolved] (KAFKA-3222) SslProducerSendTest transient failure.

2016-02-09 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3222?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-3222.

Resolution: Duplicate

> SslProducerSendTest transient failure.
> --
>
> Key: KAFKA-3222
> URL: https://issues.apache.org/jira/browse/KAFKA-3222
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Jiangjie Qin
>
> Saw the following error from time to time.
> {noformat}
> kafka.common.TopicExistsException: Topic "topic" already exists.
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3222) SslProducerSendTest transient failure.

2016-02-09 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3222?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15138753#comment-15138753
 ] 

Ismael Juma commented on KAFKA-3222:


Duplicate of KAFKA-3217.

> SslProducerSendTest transient failure.
> --
>
> Key: KAFKA-3222
> URL: https://issues.apache.org/jira/browse/KAFKA-3222
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Jiangjie Qin
>
> Saw the following error from time to time.
> {noformat}
> kafka.common.TopicExistsException: Topic "topic" already exists.
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-3223) Add System (ducktape) Test that asserts strict partition ordering despite node failure

2016-02-09 Thread Ben Stopford (JIRA)
Ben Stopford created KAFKA-3223:
---

 Summary: Add System (ducktape) Test that asserts strict partition 
ordering despite node failure
 Key: KAFKA-3223
 URL: https://issues.apache.org/jira/browse/KAFKA-3223
 Project: Kafka
  Issue Type: Test
Reporter: Ben Stopford


Kafka provides strong ordering when max.inflight.requests.per.connection = 1. 
There currently exists a bug (patched but not released) against this feature. 
It's an important feature for many customers so we should add a test to assert 
that the contract is honoured.

Suggest a similar format to ReassignPartitionsTest 
(reassign_partitions_test.py) or ReplicationTest (replication_test.py). It 
should simply be a case of asserting on the order of the messages (which are 
monotonically increasing numbers in these tests) in each partition with the 
inflight configuration.

Note that this jira depends on the merge of KAFKA-3197 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3218) Kafka-0.9.0.0 does not work as OSGi module

2016-02-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15138504#comment-15138504
 ] 

ASF GitHub Bot commented on KAFKA-3218:
---

GitHub user rajinisivaram opened a pull request:

https://github.com/apache/kafka/pull/888

KAFKA-3218: Use static classloading for default config classes

Static classloading is better for default classes used in config to ensure 
that the classes can be loaded in any environment (OSGi, JEE, etc., which rely 
on different classloading strategies).

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rajinisivaram/kafka KAFKA-3218

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/888.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #888


commit c6fd1857fe085f95e9042745e24f762b8b4c550c
Author: Rajini Sivaram 
Date:   2016-02-09T08:05:06Z

KAFKA-3218: Use static classloading for default config classes




> Kafka-0.9.0.0 does not work as OSGi module
> --
>
> Key: KAFKA-3218
> URL: https://issues.apache.org/jira/browse/KAFKA-3218
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.9.0.0
> Environment: Apache Felix OSGi container
> jdk_1.8.0_60
>Reporter: Joe O'Connor
>Assignee: Rajini Sivaram
> Attachments: ContextClassLoaderBug.tar.gz
>
>
> KAFKA-2295 changed all Class.forName() calls to use 
> currentThread().getContextClassLoader() instead of the default "classloader 
> that loaded the current class". 
> OSGi loads each module's classes using a separate classloader so this is now 
> broken.
> Steps to reproduce: 
> # install the kafka-clients servicemix OSGi module 0.9.0.0_1
> # attempt to initialize the Kafka producer client from Java code 
> Expected results: 
> - call to "new KafkaProducer()" succeeds
> Actual results: 
> - "new KafkaProducer()" throws ConfigException:
> {quote}Suppressed: java.lang.Exception: Error starting bundle54: 
> Activator start error in bundle com.openet.testcase.ContextClassLoaderBug 
> [54].
> at 
> org.apache.karaf.bundle.command.BundlesCommand.doExecute(BundlesCommand.java:66)
> ... 12 more
> Caused by: org.osgi.framework.BundleException: Activator start error 
> in bundle com.openet.testcase.ContextClassLoaderBug [54].
> at 
> org.apache.felix.framework.Felix.activateBundle(Felix.java:2276)
> at 
> org.apache.felix.framework.Felix.startBundle(Felix.java:2144)
> at 
> org.apache.felix.framework.BundleImpl.start(BundleImpl.java:998)
> at 
> org.apache.karaf.bundle.command.Start.executeOnBundle(Start.java:38)
> at 
> org.apache.karaf.bundle.command.BundlesCommand.doExecute(BundlesCommand.java:64)
> ... 12 more
> Caused by: java.lang.ExceptionInInitializerError
> at 
> org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:156)
> at com.openet.testcase.Activator.start(Activator.java:16)
> at 
> org.apache.felix.framework.util.SecureAction.startActivator(SecureAction.java:697)
> at 
> org.apache.felix.framework.Felix.activateBundle(Felix.java:2226)
> ... 16 more
> *Caused by: org.apache.kafka.common.config.ConfigException: Invalid 
> value org.apache.kafka.clients.producer.internals.DefaultPartitioner for 
> configuration partitioner.class: Class* 
> *org.apache.kafka.clients.producer.internals.DefaultPartitioner could not be 
> found.*
> at 
> org.apache.kafka.common.config.ConfigDef.parseType(ConfigDef.java:255)
> at 
> org.apache.kafka.common.config.ConfigDef.define(ConfigDef.java:78)
> at 
> org.apache.kafka.common.config.ConfigDef.define(ConfigDef.java:94)
> at 
> org.apache.kafka.clients.producer.ProducerConfig.<clinit>(ProducerConfig.java:206)
> {quote}
> Workaround is to call "currentThread().setContextClassLoader(null)" before 
> initializing the kafka producer.
> Possible fix is to catch ClassNotFoundException at ConfigDef.java:247 and 
> retry the Class.forName() call with the default classloader. However with 
> this fix there is still a problem at AbstractConfig.java:206,  where the 
> newInstance() call succeeds but "instanceof" is false because the classes 
> were loaded by different classloaders.
> Testcase attached, see README.txt for instructions.
> See also SM-2743



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-3218: Use static classloading for defaul...

2016-02-09 Thread rajinisivaram
GitHub user rajinisivaram opened a pull request:

https://github.com/apache/kafka/pull/888

KAFKA-3218: Use static classloading for default config classes

Static classloading is better for default classes used in config to ensure 
that the classes can be loaded in any environment (OSGi, JEE, etc., which rely 
on different classloading strategies).

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rajinisivaram/kafka KAFKA-3218

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/888.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #888


commit c6fd1857fe085f95e9042745e24f762b8b4c550c
Author: Rajini Sivaram 
Date:   2016-02-09T08:05:06Z

KAFKA-3218: Use static classloading for default config classes




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-3218) Kafka-0.9.0.0 does not work as OSGi module

2016-02-09 Thread Rajini Sivaram (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajini Sivaram updated KAFKA-3218:
--
Status: Patch Available  (was: Open)

The PR addresses the loading of default classes for configuration properties by 
switching to static classloading, so that Kafka classes can be loaded in 
different environments like JEE, OSGi, etc., which use different classloading 
strategies. Most properties supplied by the application (e.g. key.serializer) 
can already be specified as classes to avoid relying on the current 
classloading strategy in Kafka:
{quote}
properties.put("key.serializer", 
org.apache.kafka.common.serialization.StringSerializer.class)
{quote}

Joe, can you check if this solution works for you?

To enable all features of Kafka to work well in OSGi, all uses of dynamic 
classloading need to be fixed. This needs more work and it would be better to 
do this with a KIP.
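
A minimal sketch of the suggestion above: supplying Class objects rather than
class-name strings in the producer config, so that no Class.forName() lookup
via the thread context classloader is needed for these properties. The
bootstrap address is an illustrative assumption.

{code}
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.common.serialization.StringSerializer;

public class OsgiFriendlyProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // illustrative
        // Pass Class objects directly; the config parser accepts them and
        // skips the context-classloader Class.forName() lookup.
        props.put("key.serializer", StringSerializer.class);
        props.put("value.serializer", StringSerializer.class);
        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        producer.close();
    }
}
{code}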


> Kafka-0.9.0.0 does not work as OSGi module
> --
>
> Key: KAFKA-3218
> URL: https://issues.apache.org/jira/browse/KAFKA-3218
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.9.0.0
> Environment: Apache Felix OSGi container
> jdk_1.8.0_60
>Reporter: Joe O'Connor
>Assignee: Rajini Sivaram
> Attachments: ContextClassLoaderBug.tar.gz
>
>
> KAFKA-2295 changed all Class.forName() calls to use 
> currentThread().getContextClassLoader() instead of the default "classloader 
> that loaded the current class". 
> OSGi loads each module's classes using a separate classloader so this is now 
> broken.
> Steps to reproduce: 
> # install the kafka-clients servicemix OSGi module 0.9.0.0_1
> # attempt to initialize the Kafka producer client from Java code 
> Expected results: 
> - call to "new KafkaProducer()" succeeds
> Actual results: 
> - "new KafkaProducer()" throws ConfigException:
> {quote}Suppressed: java.lang.Exception: Error starting bundle54: 
> Activator start error in bundle com.openet.testcase.ContextClassLoaderBug 
> [54].
> at 
> org.apache.karaf.bundle.command.BundlesCommand.doExecute(BundlesCommand.java:66)
> ... 12 more
> Caused by: org.osgi.framework.BundleException: Activator start error 
> in bundle com.openet.testcase.ContextClassLoaderBug [54].
> at 
> org.apache.felix.framework.Felix.activateBundle(Felix.java:2276)
> at 
> org.apache.felix.framework.Felix.startBundle(Felix.java:2144)
> at 
> org.apache.felix.framework.BundleImpl.start(BundleImpl.java:998)
> at 
> org.apache.karaf.bundle.command.Start.executeOnBundle(Start.java:38)
> at 
> org.apache.karaf.bundle.command.BundlesCommand.doExecute(BundlesCommand.java:64)
> ... 12 more
> Caused by: java.lang.ExceptionInInitializerError
> at 
> org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:156)
> at com.openet.testcase.Activator.start(Activator.java:16)
> at 
> org.apache.felix.framework.util.SecureAction.startActivator(SecureAction.java:697)
> at 
> org.apache.felix.framework.Felix.activateBundle(Felix.java:2226)
> ... 16 more
> *Caused by: org.apache.kafka.common.config.ConfigException: Invalid 
> value org.apache.kafka.clients.producer.internals.DefaultPartitioner for 
> configuration partitioner.class: Class* 
> *org.apache.kafka.clients.producer.internals.DefaultPartitioner could not be 
> found.*
> at 
> org.apache.kafka.common.config.ConfigDef.parseType(ConfigDef.java:255)
> at 
> org.apache.kafka.common.config.ConfigDef.define(ConfigDef.java:78)
> at 
> org.apache.kafka.common.config.ConfigDef.define(ConfigDef.java:94)
> at 
> org.apache.kafka.clients.producer.ProducerConfig.<clinit>(ProducerConfig.java:206)
> {quote}
> Workaround is to call "currentThread().setContextClassLoader(null)" before 
> initializing the kafka producer.
> Possible fix is to catch ClassNotFoundException at ConfigDef.java:247 and 
> retry the Class.forName() call with the default classloader. However with 
> this fix there is still a problem at AbstractConfig.java:206,  where the 
> newInstance() call succeeds but "instanceof" is false because the classes 
> were loaded by different classloaders.
> Testcase attached, see README.txt for instructions.
> See also SM-2743



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3218) Kafka-0.9.0.0 does not work as OSGi module

2016-02-09 Thread Joe O'Connor (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15139026#comment-15139026
 ] 

Joe O'Connor commented on KAFKA-3218:
-

Hi Rajini,
Thanks for the suggested workaround, we will try that out and let you know if 
it works.
Regards
Joe 

> Kafka-0.9.0.0 does not work as OSGi module
> --
>
> Key: KAFKA-3218
> URL: https://issues.apache.org/jira/browse/KAFKA-3218
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.9.0.0
> Environment: Apache Felix OSGi container
> jdk_1.8.0_60
>Reporter: Joe O'Connor
>Assignee: Rajini Sivaram
> Attachments: ContextClassLoaderBug.tar.gz
>
>
> KAFKA-2295 changed all Class.forName() calls to use 
> currentThread().getContextClassLoader() instead of the default "classloader 
> that loaded the current class". 
> OSGi loads each module's classes using a separate classloader so this is now 
> broken.
> Steps to reproduce: 
> # install the kafka-clients servicemix OSGi module 0.9.0.0_1
> # attempt to initialize the Kafka producer client from Java code 
> Expected results: 
> - call to "new KafkaProducer()" succeeds
> Actual results: 
> - "new KafkaProducer()" throws ConfigException:
> {quote}Suppressed: java.lang.Exception: Error starting bundle54: 
> Activator start error in bundle com.openet.testcase.ContextClassLoaderBug 
> [54].
> at 
> org.apache.karaf.bundle.command.BundlesCommand.doExecute(BundlesCommand.java:66)
> ... 12 more
> Caused by: org.osgi.framework.BundleException: Activator start error 
> in bundle com.openet.testcase.ContextClassLoaderBug [54].
> at 
> org.apache.felix.framework.Felix.activateBundle(Felix.java:2276)
> at 
> org.apache.felix.framework.Felix.startBundle(Felix.java:2144)
> at 
> org.apache.felix.framework.BundleImpl.start(BundleImpl.java:998)
> at 
> org.apache.karaf.bundle.command.Start.executeOnBundle(Start.java:38)
> at 
> org.apache.karaf.bundle.command.BundlesCommand.doExecute(BundlesCommand.java:64)
> ... 12 more
> Caused by: java.lang.ExceptionInInitializerError
> at 
> org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:156)
> at com.openet.testcase.Activator.start(Activator.java:16)
> at 
> org.apache.felix.framework.util.SecureAction.startActivator(SecureAction.java:697)
> at 
> org.apache.felix.framework.Felix.activateBundle(Felix.java:2226)
> ... 16 more
> *Caused by: org.apache.kafka.common.config.ConfigException: Invalid 
> value org.apache.kafka.clients.producer.internals.DefaultPartitioner for 
> configuration partitioner.class: Class* 
> *org.apache.kafka.clients.producer.internals.DefaultPartitioner could not be 
> found.*
> at 
> org.apache.kafka.common.config.ConfigDef.parseType(ConfigDef.java:255)
> at 
> org.apache.kafka.common.config.ConfigDef.define(ConfigDef.java:78)
> at 
> org.apache.kafka.common.config.ConfigDef.define(ConfigDef.java:94)
> at 
> org.apache.kafka.clients.producer.ProducerConfig.<clinit>(ProducerConfig.java:206)
> {quote}
> Workaround is to call "currentThread().setContextClassLoader(null)" before 
> initializing the kafka producer.
> Possible fix is to catch ClassNotFoundException at ConfigDef.java:247 and 
> retry the Class.forName() call with the default classloader. However with 
> this fix there is still a problem at AbstractConfig.java:206,  where the 
> newInstance() call succeeds but "instanceof" is false because the classes 
> were loaded by different classloaders.
> Testcase attached, see README.txt for instructions.
> See also SM-2743



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3055) JsonConverter mangles schema during serialization (fromConnectData)

2016-02-09 Thread Kishore Senji (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15139051#comment-15139051
 ] 

Kishore Senji commented on KAFKA-3055:
--

Please look here: 
https://github.com/ksenji/kafka-connect-test/blob/master/src/test/java/org/apache/kafka/connect/json/JsonConverterWithNoCacheTest.java

> JsonConverter mangles schema during serialization (fromConnectData)
> ---
>
> Key: KAFKA-3055
> URL: https://issues.apache.org/jira/browse/KAFKA-3055
> Project: Kafka
>  Issue Type: Bug
>  Components: copycat
>Affects Versions: 0.9.0.0
>Reporter: Kishore Senji
>Assignee: Ewen Cheslack-Postava
> Fix For: 0.9.0.1, 0.9.1.0
>
>
> Test case is here: 
> https://github.com/ksenji/kafka-connect-test/tree/master/src/test/java/org/apache/kafka/connect/json
> If Caching is disabled, it behaves correctly and JsonConverterWithNoCacheTest 
> runs successfully. Otherwise the test JsonConverterTest fails.
> The reason is that the JsonConverter has a bug where it mangles the schema as 
> it assigns all String fields the same name (and similarly for all Int32 
> fields).
> This is how the schema & payload get serialized for the Person Struct (with 
> caching disabled):
> {code}
> {"schema":{"type":"struct","fields":[{"type":"string","optional":false,"field":"firstName"},{"type":"string","optional":false,"field":"lastName"},{"type":"string","optional":false,"field":"email"},{"type":"int32","optional":false,"field":"age"},{"type":"int32","optional":false,"field":"weightInKgs"}],"optional":false,"name":"Person"},"payload":{"firstName":"Eric","lastName":"Cartman","email":"eric.cart...@southpark.com","age":10,"weightInKgs":40}}
> {code}
> whereas when caching is enabled the same Struct gets serialized as:
> {code}
> {"schema":{"type":"struct","fields":[{"type":"string","optional":false,"field":"email"},{"type":"string","optional":false,"field":"email"},{"type":"string","optional":false,"field":"email"},{"type":"int32","optional":false,"field":"weightInKgs"},{"type":"int32","optional":false,"field":"weightInKgs"}],"optional":false,"name":"Person"},"payload":{"firstName":"Eric","lastName":"Cartman","email":"eric.cart...@southpark.com","age":10,"weightInKgs":40}}
> {code}
> As we can see all String fields became "email" and all int32 fields became 
> "weightInKgs". 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
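
A minimal sketch reproducing the serialization path discussed above, assuming
the standard Connect data and JsonConverter APIs; the email value is a
placeholder for the elided address. With the caching bug present, the printed
schema would show the repeated field names described in the comment.

{code}
import java.util.Collections;
import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.data.SchemaBuilder;
import org.apache.kafka.connect.data.Struct;
import org.apache.kafka.connect.json.JsonConverter;

public class JsonConverterRepro {
    public static void main(String[] args) {
        Schema schema = SchemaBuilder.struct().name("Person")
                .field("firstName", Schema.STRING_SCHEMA)
                .field("lastName", Schema.STRING_SCHEMA)
                .field("email", Schema.STRING_SCHEMA)
                .field("age", Schema.INT32_SCHEMA)
                .field("weightInKgs", Schema.INT32_SCHEMA)
                .build();
        Struct person = new Struct(schema)
                .put("firstName", "Eric")
                .put("lastName", "Cartman")
                .put("email", "eric@example.com") // placeholder address
                .put("age", 10)
                .put("weightInKgs", 40);
        JsonConverter converter = new JsonConverter();
        converter.configure(Collections.singletonMap("schemas.enable", "true"), false);
        // With the caching bug, the schema below repeats one field name per
        // type ("email" for strings, "weightInKgs" for int32s).
        byte[] serialized = converter.fromConnectData("persons", schema, person);
        System.out.println(new String(serialized));
    }
}
{code}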


[jira] [Updated] (KAFKA-3224) Add timestamp-based log deletion policy

2016-02-09 Thread Bill Warshaw (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Warshaw updated KAFKA-3224:

Description: 
One of Kafka's officially-described use cases is a distributed commit log 
(http://kafka.apache.org/documentation.html#uses_commitlog). In this case, for 
a distributed service that needed a commit log, there would be a topic with a 
single partition to guarantee log order. This service would use the commit log 
to re-sync failed nodes. Kafka is generally an excellent fit for such a system, 
but it does not expose an adequate mechanism for log cleanup in such a case. 
With a distributed commit log, data can only be deleted when the client 
application determines that it is no longer needed; this creates completely 
arbitrary ranges of time and size for messages, which the existing cleanup 
mechanisms can't handle smoothly.
A new deletion policy based on the absolute timestamp of a message would work 
perfectly for this case.  The client application will periodically update the 
minimum timestamp of messages to retain, and Kafka will delete all messages 
earlier than that timestamp using the existing log cleaner thread mechanism.
This is based on the work being done in KIP-32 - Add timestamps to Kafka 
message.

h3. Initial Approach
https://github.com/apache/kafka/commit/2c51ae3cead99432ebf19f0303f8cc797723c939

  was:
One of Kafka's officially-described use cases is a distributed commit log 
(http://kafka.apache.org/documentation.html#uses_commitlog). In this case, for 
a distributed service that needed a commit log, there would be a topic with a 
single partition to guarantee log order. This service would use the commit log 
to re-sync failed nodes. Kafka is generally an excellent fit for such a system, 
but it does not expose an adequate mechanism for log cleanup in such a case. 
With a distributed commit log, data can only be deleted when the client 
application determines that it is no longer needed; this creates completely 
arbitrary ranges of time and size for messages, which the existing cleanup 
mechanisms can't handle smoothly.
A new deletion policy based on the absolute timestamp of a message would work 
perfectly for this case.  The client application will periodically update the 
minimum timestamp of messages to retain, and Kafka will delete all messages 
earlier than that timestamp using the existing log cleaner thread mechanism.
This is based on the work being done in KIP-32 - Add timestamps to Kafka 
message.


> Add timestamp-based log deletion policy
> ---
>
> Key: KAFKA-3224
> URL: https://issues.apache.org/jira/browse/KAFKA-3224
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Bill Warshaw
>  Labels: kafka
>
> One of Kafka's officially-described use cases is a distributed commit log 
> (http://kafka.apache.org/documentation.html#uses_commitlog). In this case, 
> for a distributed service that needed a commit log, there would be a topic 
> with a single partition to guarantee log order. This service would use the 
> commit log to re-sync failed nodes. Kafka is generally an excellent fit for 
> such a system, but it does not expose an adequate mechanism for log cleanup 
> in such a case. With a distributed commit log, data can only be deleted when 
> the client application determines that it is no longer needed; this creates 
> completely arbitrary ranges of time and size for messages, which the existing 
> cleanup mechanisms can't handle smoothly.
> A new deletion policy based on the absolute timestamp of a message would work 
> perfectly for this case.  The client application will periodically update the 
> minimum timestamp of messages to retain, and Kafka will delete all messages 
> earlier than that timestamp using the existing log cleaner thread mechanism.
> This is based on the work being done in KIP-32 - Add timestamps to Kafka 
> message.
> h3. Initial Approach
> https://github.com/apache/kafka/commit/2c51ae3cead99432ebf19f0303f8cc797723c939



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[DISCUSS] KIP-47 - Add timestamp-based log deletion policy

2016-02-09 Thread Bill Warshaw
Hello,

I just submitted KIP-47 for adding a new log deletion policy based on a
minimum timestamp of messages to retain.

https://cwiki.apache.org/confluence/display/KAFKA/KIP-47+-+Add+timestamp-based+log+deletion+policy

I'm open to any comments or suggestions.

Thanks,
Bill Warshaw
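
A minimal sketch, not taken from the KIP itself, of the deletion predicate the
proposal describes: a log segment becomes deletable once even its newest
message is older than the client-supplied minimum timestamp to retain. All
names here are hypothetical.

{code}
import java.util.ArrayList;
import java.util.List;

public class TimestampRetentionSketch {
    // Hypothetical stand-in for a log segment with a known largest timestamp.
    static class SegmentInfo {
        final String name;
        final long largestTimestampMs;
        SegmentInfo(String name, long largestTimestampMs) {
            this.name = name;
            this.largestTimestampMs = largestTimestampMs;
        }
    }

    // A segment is deletable when every message in it (in particular its
    // newest one) is older than the minimum timestamp to retain, which the
    // client application would update periodically.
    static List<SegmentInfo> deletableSegments(List<SegmentInfo> segments,
                                               long minRetainedTimestampMs) {
        List<SegmentInfo> deletable = new ArrayList<>();
        for (SegmentInfo segment : segments) {
            if (segment.largestTimestampMs < minRetainedTimestampMs) {
                deletable.add(segment);
            }
        }
        return deletable;
    }
}
{code}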


[jira] [Created] (KAFKA-3224) Add timestamp-based log deletion policy

2016-02-09 Thread Bill Warshaw (JIRA)
Bill Warshaw created KAFKA-3224:
---

 Summary: Add timestamp-based log deletion policy
 Key: KAFKA-3224
 URL: https://issues.apache.org/jira/browse/KAFKA-3224
 Project: Kafka
  Issue Type: Improvement
Reporter: Bill Warshaw


One of Kafka's officially-described use cases is a distributed commit log 
(http://kafka.apache.org/documentation.html#uses_commitlog). In this case, for 
a distributed service that needed a commit log, there would be a topic with a 
single partition to guarantee log order. This service would use the commit log 
to re-sync failed nodes. Kafka is generally an excellent fit for such a system, 
but it does not expose an adequate mechanism for log cleanup in such a case. 
With a distributed commit log, data can only be deleted when the client 
application determines that it is no longer needed; this creates completely 
arbitrary ranges of time and size for messages, which the existing cleanup 
mechanisms can't handle smoothly.
A new deletion policy based on the absolute timestamp of a message would work 
perfectly for this case.  The client application will periodically update the 
minimum timestamp of messages to retain, and Kafka will delete all messages 
earlier than that timestamp using the existing log cleaner thread mechanism.
This is based on the work being done in KIP-32 - Add timestamps to Kafka 
message.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: MINOR: Use explicit type in AclCommand

2016-02-09 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/885


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-611) Migration tool and Mirror Maker ignore mirror.topics.whitelist in config

2016-02-09 Thread Joe Hohertz (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15139187#comment-15139187
 ] 

Joe Hohertz commented on KAFKA-611:
---

I've recently run into this in MirrorMaker, and am considering working on a fix, 
as I need per-source whitelisting and the current approach applies the whitelist 
globally. It looks like this was partially implemented as part of KAFKA-74, 
and/or has regressed since.

The approach I am considering is to treat the command-line param, if supplied, as 
a global default that individual source properties can override with their own 
value; a sketch of this resolution follows below. I would be interested in 
anyone else's thoughts on this.
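
A minimal sketch of that resolution order, with a hypothetical helper (this is
not existing MirrorMaker code; only the mirror.topics.whitelist property name
comes from the ticket):

{code}
import java.util.Properties;

public class WhitelistResolution {
    // The --whitelist command-line value acts as a global default; a source's
    // own consumer properties may override it with mirror.topics.whitelist.
    static String effectiveWhitelist(String commandLineWhitelist, Properties sourceProps) {
        return sourceProps.getProperty("mirror.topics.whitelist", commandLineWhitelist);
    }
}
{code}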

> Migration tool and Mirror Maker ignore mirror.topics.whitelist in config
> 
>
> Key: KAFKA-611
> URL: https://issues.apache.org/jira/browse/KAFKA-611
> Project: Kafka
>  Issue Type: Bug
>  Components: config
>Affects Versions: 0.7, 0.8.0
>Reporter: Chris Riccomini
>
> Apparently Kafka ignores the "mirror.topics.whitelist" setting in both 0.7 
> MirrorMaker and 0.8 KafkaMigrationTool. It only appears to pay attention to 
> --whitelist.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-3187: Make kafka-acls.sh help messages m...

2016-02-09 Thread SinghAsDev
GitHub user SinghAsDev opened a pull request:

https://github.com/apache/kafka/pull/892

KAFKA-3187: Make kafka-acls.sh help messages more generic.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/SinghAsDev/kafka KAFKA-3187

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/892.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #892


commit bdcef12a01de88d5fcc7ac7225f45030d22b8762
Author: Ashish Singh 
Date:   2016-02-09T21:24:32Z

KAFKA-3187: Make kafka-acls.sh help messages more generic.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-3187) Make kafka-acls.sh help messages more generic.

2016-02-09 Thread Ashish K Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish K Singh updated KAFKA-3187:
--
Fix Version/s: 0.9.0.1

> Make kafka-acls.sh help messages more generic.
> --
>
> Key: KAFKA-3187
> URL: https://issues.apache.org/jira/browse/KAFKA-3187
> Project: Kafka
>  Issue Type: Sub-task
>Affects Versions: 0.9.0.0
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
> Fix For: 0.9.0.1
>
>
> Make kafka-acls.sh help messages generic, agnostic of 
> {{SimpleAclsAuthorizer}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3187) Make kafka-acls.sh help messages more generic.

2016-02-09 Thread Ashish K Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish K Singh updated KAFKA-3187:
--
Affects Version/s: 0.9.0.0

> Make kafka-acls.sh help messages more generic.
> --
>
> Key: KAFKA-3187
> URL: https://issues.apache.org/jira/browse/KAFKA-3187
> Project: Kafka
>  Issue Type: Sub-task
>Affects Versions: 0.9.0.0
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
> Fix For: 0.9.0.1
>
>
> Make kafka-acls.sh help messages generic, agnostic of 
> {{SimpleAclsAuthorizer}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Stormkafka deploy error

2016-02-09 Thread Larry Lin
Hi guys,
I am a Kafka newbie. An error occurred when I ran the following command:

#sudo ./storm jar [jar-path] [topology class] [topology name]

The error stack trace is:

Exception in thread "main" java.lang.NoClassDefFoundError: storm/kafka/BrokerHosts
        at java.lang.Class.getDeclaredMethods0(Native Method)
        at java.lang.Class.privateGetDeclaredMethods(Class.java:2625)
        at java.lang.Class.getMethod0(Class.java:2866)
        at java.lang.Class.getMethod(Class.java:1676)
        at sun.launcher.LauncherHelper.getMainMethod(LauncherHelper.java:494)
        at sun.launcher.LauncherHelper.checkAndLoadMain(LauncherHelper.java:486)
Caused by: java.lang.ClassNotFoundException: storm.kafka.BrokerHosts
        at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
        at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:358)

Does anyone have any idea about this?
Thanks a lot!

Regards,
Larry

Re: [POLL] Make next Kafka Release 0.10.0.0 instead of 0.9.1.0

2016-02-09 Thread Becket Qin
Hi Ismael,

Good point on the upgrade path. The release plan says that 0.10.0.0 is
targeted in Q2, 2016. If that is accurate, there are 6-7 months between
0.9.0.0 and 0.10.0.0, i.e. there are still about five months before the
next release. Our previous documentation only covers how to upgrade from the
last official release. If the release interval is a concern, we can add
documentation on how to upgrade from 0.8.x to 0.10.0.0 directly. Alternatively,
we can suggest users first upgrade to 0.9.0.0 and then upgrade to 0.10.0.0.

Compatibility-wise, personally I think a major version bump implies there can
be some backward-incompatible changes and people have to read the upgrade
instructions before upgrading. We need to make this very clear in our upgrade
documentation, as we did for 0.9.0.0.

Regarding the version renaming, I was actually one of the people who were
confused by the 0.8.3 to 0.9.0.0 rename. My takeaway from the last rename is
that if we have significant new features added, a new major release is
preferred.
This is the same rationale behind this poll. The significance might be a
little subjective, but somehow people seem to have similar threshold on
that. :)

Thanks,

Jiangjie (Becket) Qin


On Tue, Feb 9, 2016 at 10:43 AM, Ismael Juma  wrote:

> Hi Becket,
>
> Thanks for starting the discussion.
>
> Given the significance of the changes, I think 0.10.0.0 is appropriate.
>
> However, I think we have to be careful about the fact that we will be
> making two major releases within a short period of time. This means that
> 0.10.0.0 may be out before many have upgraded to 0.9.0.x. It would be
> good to understand the upgrade path for such people. Finally, we should be
> very clear about what the version number means in terms of compatibility.
> Some people were confused by the significance of the 0.8.3.0 to 0.9.0.0
> rename.
>
> Ismael
>
> On Tue, Feb 9, 2016 at 6:07 PM, Becket Qin  wrote:
>
> > Hi All,
> >
> > The next Kafka release will have several significant new features and
> > changes, such as Kafka Streams, the message format change, client
> > interceptors, and several new consumer API changes. We feel it is
> > better to make the next Kafka release 0.10.0.0 instead of 0.9.1.0.
> >
> > We would like to see what people think of making the next release
> > 0.10.0.0.
> >
> > Thanks,
> >
> > Jiangjie (Becket) Qin
> >
>


Re: [DISCUSS] KIP-46: Self Healing

2016-02-09 Thread Aditya Auradkar
Hi Neha,

Thanks for the detailed reply and apologies for my late response. I do have
a few comments.

1. Replica Throttling - I agree this is rather important to get done.
However, it may also be argued that this problem is orthogonal. We do not
have these protections currently yet we do run partition reassignment
fairly often. Having said that, I'm perfectly happy to tackle KIP-46 after
this problem is solved. I understand it is actively being discussed in
KAFKA-1464.

2. Pluggable policies - Can you elaborate on the need for pluggable
policies in the partition reassignment tool? Even if we make it pluggable
to begin with, this needs to ship with a default policy that makes sense
for most users. IMO, partition count is the most intuitive default and is
analogous to how we stripe partitions for new topics.

3. Even if the trigger were fully manual (as it is now), we could still
have the controller generate the assignment as per a configured policy, i.e.
effectively the tool is built into Kafka itself. Following this approach to
begin with makes it easier to fully automate in the future since we will
only need to automate the trigger later.

Aditya
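
One hypothetical shape for the pluggable policy discussed in point 2 above;
this is not an actual Kafka interface, just a sketch of the plug-in point
being debated, using real Kafka metadata types.

{code}
import java.util.List;
import java.util.Map;
import org.apache.kafka.common.Cluster;
import org.apache.kafka.common.TopicPartition;

// Hypothetical plug-in point: given current cluster metadata, produce a
// target replica assignment (partition -> ordered list of broker ids).
// A default implementation could equalize partition counts per broker
// while minimizing data movement, analogous to new-topic striping.
interface ReassignmentPolicy {
    Map<TopicPartition, List<Integer>> generateAssignment(Cluster cluster);
}
{code}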



On Wed, Feb 3, 2016 at 1:57 PM, Neha Narkhede  wrote:

> Adi,
>
> Thanks for the write-up. Here are my thoughts:
>
> I think you are suggesting a way of automatically restoring a topic’s
> replication factor in a specific scenario: in the event of
> permanent broker failures. I agree that the partition reassignment
> mechanism should be used to add replicas when they are lost to permanent
> broker failures. But I think the KIP probably bites off more than we can
> chew.
>
> Before we automate detection of permanent broker failures and have the
> controller mitigate through automatic data balancing, I’d like to point out
> that our current difficulty is not that but the ability to generate a
> workable partition assignment for rebalancing data in a cluster.
>
> There are 2 problems with partition rebalancing today:
>
>1. Lack of replica throttling for balancing data: In the absence of
>replica throttling, even if you come up with an assignment that might be
>workable, it isn’t practical to kick it off without worrying about
> bringing
>the entire cluster down. I don’t think the hack of moving partitions in
>batches is effective, as it is at best a guess.
>2. Lack of support for policies in the rebalance tool that automatically
>generate a workable partition assignment: There is no easy way to
> generate
>a partition reassignment JSON file. An example of a policy is “end up
> with
>an equal number of partitions on every broker while minimizing data
>movement”. There might be other policies that might make sense, we’d
> have
>to experiment.
>
> Broadly speaking, the data balancing problem is comprised of 3 parts:
>
>1. Trigger: An event that triggers data balancing to take place. KIP-46
>suggests a specific trigger and that is permanent broker failure. But
> there
>might be several other events that might make sense — Cluster expansion,
>decommissioning brokers, data imbalance
>2. Policy: Given a set of constraints, generate a target partition
>assignment that can be executed when triggered.
>3. Mechanism: Given a partition assignment, make the state changes and
>actually move the data until the target assignment is achieved.
>
> Currently, the trigger is manual through the rebalance tool, there is no
> support for any viable policy today and we have a built-in mechanism that,
> given a policy and upon a trigger, moves data in a cluster but does not
> support throttling.
>
> Given that both the policy and the throttling improvement to the mechanism
> are hard problems and given our past experience of operationalizing
> partition reassignment (required months of testing before we got it right),
> I strongly recommend attacking this problem in stages. I think a more
> practical approach would be to add the concept of pluggable policies in the
> rebalance tool, implement a practical policy that generates a workable
> partition assignment upon triggering the tool and improve the mechanism to
> support throttling so that a given policy can succeed without manual
> intervention. If we solved these problems first, the rebalance tool would
> be much more accessible to Kafka users and operators.
>
> Assuming that we do this, the problem that KIP-46 aims to solve becomes
> much easier. You can separate the detection of permanent broker failures
> (trigger) from the mitigation (above-mentioned improvements to data
> balancing). The latter will be a native capability in Kafka. Detecting
> permanent hardware failures is much more easily done via an external script that
> uses a simple health check. (Part 1 of KIP-46).
>
> I agree that it will be great to *eventually* be able to fully automate
> both the trigger as well as the policies while also improving the
> 

[jira] [Commented] (KAFKA-2375) Implement elasticsearch Copycat sink connector

2016-02-09 Thread Ewen Cheslack-Postava (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15139836#comment-15139836
 ] 

Ewen Cheslack-Postava commented on KAFKA-2375:
--

[~arttii] Great to hear you're working on this! This JIRA was filed early on 
when we weren't sure if we would include some more complex examples of 
connectors with Kafka itself (in which case dependencies might be more of a 
concern, thus the comment about using the REST API). Using the Java API 
provided with ES can definitely make sense if you're not trying to limit 
dependencies.

I think plenty of folks in the community are interested in more open source 
connectors, especially for popular systems. Generally we're planning to keep 
most connector plugins as external libraries so development is federated (and 
not limited by the throughput of Kafka committers reviewing code!). If you're 
thinking about sharing it, I'd recommend putting it up on GitHub/BitBucket/your 
preferred repository hosting service and posting a note to the mailing list 
about it.

> Implement elasticsearch Copycat sink connector
> --
>
> Key: KAFKA-2375
> URL: https://issues.apache.org/jira/browse/KAFKA-2375
> Project: Kafka
>  Issue Type: Sub-task
>  Components: copycat
>Reporter: Ewen Cheslack-Postava
>Assignee: Liquan Pei
>
> Implement an elasticsearch sink connector for Copycat. This should send 
> records to elasticsearch with unique document IDs, given appropriate configs 
> to extract IDs from input records.
> The motivation here is to provide a good end-to-end example with built-in 
> connectors that require minimal dependencies. Because Elasticsearch has a 
> very simple REST API, an elasticsearch connector shouldn't require any extra 
> dependencies and logs -> Elasticsearch (in combination with KAFKA-2374) 
> provides a compelling out-of-the-box Copycat use case.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk7 #1031

2016-02-09 Thread Apache Jenkins Server
See 

Changes:

[me] HOTFIX: open window segments in order, add segment id check in

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu-4 (docker Ubuntu ubuntu4 ubuntu yahoo-not-h2) in 
workspace 
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision b5e6b8671a5b6d97d5026261ae8d62b54f068e53 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f b5e6b8671a5b6d97d5026261ae8d62b54f068e53
 > git rev-list 4fd60c6120dc4330deb7317360c835b04bc6cb9a # timeout=10
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/hudson2811619028456682574.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.4-rc-2/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:downloadWrapper

BUILD SUCCESSFUL

Total time: 21.766 secs
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/hudson2258063974474508891.sh
+ export GRADLE_OPTS=-Xmx1024m
+ GRADLE_OPTS=-Xmx1024m
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 clean jarAll testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.10/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:clean UP-TO-DATE
:clients:clean UP-TO-DATE
:connect:clean UP-TO-DATE
:core:clean UP-TO-DATE
:examples:clean UP-TO-DATE
:log4j-appender:clean UP-TO-DATE
:streams:clean UP-TO-DATE
:tools:clean UP-TO-DATE
:connect:api:clean UP-TO-DATE
:connect:file:clean UP-TO-DATE
:connect:json:clean UP-TO-DATE
:connect:runtime:clean UP-TO-DATE
:streams:examples:clean UP-TO-DATE
:jar_core_2_10
Building project 'core' with Scala version 2.10.6
:kafka-trunk-jdk7:clients:compileJava
:jar_core_2_10 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Failed to capture snapshot of input files for task 'compileJava' during 
up-to-date check.  See stacktrace for details.
> Could not add entry 
> '/home/jenkins/.gradle/caches/modules-2/files-2.1/net.jpountz.lz4/lz4/1.3.0/c708bb2590c0652a642236ef45d9f99ff842a2ce/lz4-1.3.0.jar'
>  to cache fileHashes.bin 
> (

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED

Total time: 26.849 secs
Build step 'Execute shell' marked build as failure
Recording test results
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
ERROR: Publisher 'Publish JUnit test result report' failed: No test report 
files were found. Configuration error?
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51


[GitHub] kafka pull request: HOTFIX: open window segments in order, add seg...

2016-02-09 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/891


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-2750) Sender.java: handleProduceResponse does not check protocol version

2016-02-09 Thread Felix GV (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15139784#comment-15139784
 ] 

Felix GV commented on KAFKA-2750:
-

Shouldn't the producer automatically do a graceful degradation of its protocol, 
without even bubbling anything up to the user?

> Sender.java: handleProduceResponse does not check protocol version
> --
>
> Key: KAFKA-2750
> URL: https://issues.apache.org/jira/browse/KAFKA-2750
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.0
>Reporter: Geoff Anderson
>
> If you try to run a 0.9 producer against a 0.8.2.2 Kafka broker, you get a fairly 
> cryptic error message:
> [2015-11-04 18:55:43,583] ERROR Uncaught error in kafka producer I/O thread:  
> (org.apache.kafka.clients.producer.internals.Sender)
> org.apache.kafka.common.protocol.types.SchemaException: Error reading field 
> 'throttle_time_ms': java.nio.BufferUnderflowException
>   at org.apache.kafka.common.protocol.types.Schema.read(Schema.java:71)
>   at 
> org.apache.kafka.clients.NetworkClient.handleCompletedReceives(NetworkClient.java:462)
>   at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:279)
>   at 
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:216)
>   at 
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:141)
> Although we shouldn't expect a 0.9 producer to work against a 0.8.X broker 
> since the protocol version has been increased, perhaps the error could be 
> clearer.
> The cause seems to be that in Sender.java, handleProduceResponse does not 
> have any mechanism for checking the protocol version of the received produce 
> response; it just calls a constructor that blindly tries to read the 
> throttle time field, which in this case fails.
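
To make the failure mode concrete, here is a self-contained sketch of a v1
parser reading a v0 payload that lacks the throttle_time_ms field; the class
and method names are hypothetical, not Kafka's actual protocol classes:

{code}
import java.nio.BufferUnderflowException;
import java.nio.ByteBuffer;

public class ProduceResponseDemo {
    // Hypothetical layouts: v0 carries only an error code, while v1 appends
    // a 32-bit throttle_time_ms. A v1 parser applied to v0 bytes underflows.
    static int readV1ThrottleTime(ByteBuffer payload) {
        payload.getShort();      // error code (present in both versions)
        return payload.getInt(); // throttle_time_ms -> underflow on v0 data
    }

    public static void main(String[] args) {
        ByteBuffer v0 = ByteBuffer.allocate(2);
        v0.putShort((short) 0);  // error code only, as in a v0 response
        v0.flip();
        try {
            readV1ThrottleTime(v0);
        } catch (BufferUnderflowException e) {
            // A version-aware handler would check the request's version
            // first, or at least fail with a clearer message than this.
            System.out.println("v1 parser on v0 payload: " + e);
        }
    }
}
{code}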



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3187) Make kafka-acls.sh help messages more generic.

2016-02-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15139795#comment-15139795
 ] 

ASF GitHub Bot commented on KAFKA-3187:
---

GitHub user SinghAsDev opened a pull request:

https://github.com/apache/kafka/pull/892

KAFKA-3187: Make kafka-acls.sh help messages more generic.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/SinghAsDev/kafka KAFKA-3187

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/892.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #892


commit bdcef12a01de88d5fcc7ac7225f45030d22b8762
Author: Ashish Singh 
Date:   2016-02-09T21:24:32Z

KAFKA-3187: Make kafka-acls.sh help messages more generic.




> Make kafka-acls.sh help messages more generic.
> --
>
> Key: KAFKA-3187
> URL: https://issues.apache.org/jira/browse/KAFKA-3187
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
>
> Make kafka-acls.sh help messages generic, agnostic of 
> {{SimpleAclsAuthorizer}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3187) Make kafka-acls.sh help messages more generic.

2016-02-09 Thread Ashish K Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish K Singh updated KAFKA-3187:
--
Status: Patch Available  (was: Open)

> Make kafka-acls.sh help messages more generic.
> --
>
> Key: KAFKA-3187
> URL: https://issues.apache.org/jira/browse/KAFKA-3187
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
>
> Make kafka-acls.sh help messages generic, agnostic of 
> {{SimpleAclsAuthorizer}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk8 #358

2016-02-09 Thread Apache Jenkins Server
See 

Changes:

[me] HOTFIX: open window segments in order, add segment id check in

--
[...truncated 82 lines...]
/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk8/core/src/main/scala/kafka/server/KafkaServer.scala:305:
 a pure expression does nothing in statement position; you may be omitting 
necessary parentheses
ControllerStats.leaderElectionTimer
^
/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk8/core/src/main/scala/kafka/controller/ControllerChannelManager.scala:389:
 class BrokerEndPoint in object UpdateMetadataRequest is deprecated: see 
corresponding Javadoc for more information.
  new UpdateMetadataRequest.BrokerEndPoint(brokerEndPoint.id, 
brokerEndPoint.host, brokerEndPoint.port)
^
/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk8/core/src/main/scala/kafka/controller/ControllerChannelManager.scala:391:
 constructor UpdateMetadataRequest in class UpdateMetadataRequest is 
deprecated: see corresponding Javadoc for more information.
new UpdateMetadataRequest(controllerId, controllerEpoch, 
liveBrokers.asJava, partitionStates.asJava)
^
/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk8/core/src/main/scala/kafka/network/BlockingChannel.scala:129:
 method readFromReadableChannel in class NetworkReceive is deprecated: see 
corresponding Javadoc for more information.
  response.readFromReadableChannel(channel)
   ^
there were 15 feature warning(s); re-run with -feature for details
11 warnings found
warning: [options] bootstrap class path not set in conjunction with -source 1.7
1 warning
:kafka-trunk-jdk8:core:processResources UP-TO-DATE
:kafka-trunk-jdk8:core:classes
:kafka-trunk-jdk8:clients:compileTestJava
warning: [options] bootstrap class path not set in conjunction with -source 1.7
Note: 
/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk8/clients/src/test/java/org/apache/kafka/common/requests/RequestResponseTest.java
 uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
1 warning

:kafka-trunk-jdk8:clients:processTestResources
:kafka-trunk-jdk8:clients:testClasses
:kafka-trunk-jdk8:core:copyDependantLibs
:kafka-trunk-jdk8:core:copyDependantTestLibs
:kafka-trunk-jdk8:core:jar
:jar_core_2_11
Building project 'core' with Scala version 2.11.7
:kafka-trunk-jdk8:clients:compileJava UP-TO-DATE
:kafka-trunk-jdk8:clients:processResources UP-TO-DATE
:kafka-trunk-jdk8:clients:classes UP-TO-DATE
:kafka-trunk-jdk8:clients:determineCommitId UP-TO-DATE
:kafka-trunk-jdk8:clients:createVersionFile
:kafka-trunk-jdk8:clients:jar UP-TO-DATE
:kafka-trunk-jdk8:core:compileJava UP-TO-DATE
:kafka-trunk-jdk8:core:compileScala
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support was removed in 8.0

/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk8/core/src/main/scala/kafka/api/OffsetCommitRequest.scala:79:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.

org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP
 ^
/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk8/core/src/main/scala/kafka/common/OffsetMetadataAndError.scala:36:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
 commitTimestamp: Long = 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP,

  ^
/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk8/core/src/main/scala/kafka/common/OffsetMetadataAndError.scala:37:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
 expireTimestamp: Long = 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP) {

  ^
/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk8/core/src/main/scala/kafka/coordinator/GroupMetadataManager.scala:395:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
  if (value.expireTimestamp == 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP)

^
/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk8/core/src/main/scala/kafka/server/KafkaApis.scala:293:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
if 

[jira] [Commented] (KAFKA-3216) "Modifying topics" section incorrectly says you can't change replication factor.

2016-02-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15139367#comment-15139367
 ] 

ASF GitHub Bot commented on KAFKA-3216:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/881


> "Modifying topics" section incorrectly says you can't change replication 
> factor.
> 
>
> Key: KAFKA-3216
> URL: https://issues.apache.org/jira/browse/KAFKA-3216
> Project: Kafka
>  Issue Type: Bug
>Reporter: James Cheng
>Assignee: James Cheng
> Fix For: 0.9.0.1, 0.9.1.0
>
>
> The "Modifying Topics" section of the docs 
> (http://kafka.apache.org/documentation.html#basic_ops_modify_topic) says 
> {quote} Kafka does not currently support reducing the number of partitions 
> for a topic or changing the replication factor. {quote}
> But you *can* modify the replication factor. That second half of the sentence 
> should be removed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (KAFKA-3093) Keep track of connector and task status info, expose it via the REST API

2016-02-09 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson reassigned KAFKA-3093:
--

Assignee: Jason Gustafson  (was: jin xing)

> Keep track of connector and task status info, expose it via the REST API
> 
>
> Key: KAFKA-3093
> URL: https://issues.apache.org/jira/browse/KAFKA-3093
> Project: Kafka
>  Issue Type: Improvement
>  Components: copycat
>Reporter: jin xing
>Assignee: Jason Gustafson
>
> Related to KAFKA-3054.
> We should keep track of the status of each connector and task during 
> startup and execution, and handle exceptions thrown by connectors and tasks.
> Users should be able to fetch this information via the REST API and send the 
> necessary commands (reconfiguring, restarting, pausing, unpausing) to 
> connectors and tasks via the REST API, as sketched below.
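
A rough sketch of the kind of status record the REST API could expose; all
class and field names here are hypothetical, not the eventual Connect API:

{code}
// Hypothetical status snapshot for a connector or task; the real Connect
// REST API may differ in naming and structure.
public class ConnectorStatus {
    public enum State { UNASSIGNED, RUNNING, PAUSED, FAILED }

    public final String name;     // connector or task id
    public final State state;     // current lifecycle state
    public final String workerId; // worker currently running it
    public final String trace;    // stack trace when state == FAILED, else null

    public ConnectorStatus(String name, State state, String workerId, String trace) {
        this.name = name;
        this.state = state;
        this.workerId = workerId;
        this.trace = trace;
    }
}
{code}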



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [POLL] Make next Kafka Release 0.10.0.0 instead of 0.9.1.0

2016-02-09 Thread Stevo Slavić
+1 0.10.0.0

On Tue, Feb 9, 2016, 19:08 Becket Qin  wrote:

> Hi All,
>
> Next Kafka release will have several significant important new
> feature/changes such as Kafka Stream, Message Format Change, Client
> Interceptors and several new consumer API changes, etc. We feel it is
> better to make next Kafka release 0.10.0.0 instead of 0.9.1.0.
>
> We would like to see what do people think of making the next release
> 0.10.0.0.
>
> Thanks,
>
> Jiangjie (Becket) Qin
>


[GitHub] kafka pull request: HOTFIX: open window segments in order, add seg...

2016-02-09 Thread ymatsuda
GitHub user ymatsuda opened a pull request:

https://github.com/apache/kafka/pull/891

HOTFIX: open window segments in order, add segment id check in getSegment

* During window store initialization, we have to open segments in segment id 
order and update ```currentSegmentId```; otherwise cleanup won't work.
* ```getSegment()``` should not create a segment or clean up old segments 
when the requested segment id is greater than ```currentSegmentId```. Segment 
maintenance should be driven only by data insertion, never by queries (see 
the sketch after this list).
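
A minimal sketch of those two rules, assuming a hypothetical segment cache;
names such as `SegmentCache` and `openExistingSegment` are illustrative, not
the actual window store internals:

```java
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

class SegmentCache {
    private final Map<Long, Object> segments = new TreeMap<>();
    private long currentSegmentId = -1L;

    // Initialization: open existing segments in ascending id order so that
    // currentSegmentId ends at the newest segment and cleanup can work.
    void init(List<Long> existingIds) {
        existingIds.stream().sorted().forEach(id -> {
            segments.put(id, openExistingSegment(id));
            currentSegmentId = id;
        });
    }

    // Query path: never create a segment or trigger cleanup here; only
    // data insertion (not shown) may advance currentSegmentId.
    Object getSegment(long id) {
        if (id > currentSegmentId)
            return null;
        return segments.get(id);
    }

    private Object openExistingSegment(long id) {
        return new Object(); // stands in for opening a RocksDB segment
    }
}
```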

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ymatsuda/kafka hotfix2

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/891.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #891


commit b64b84cdeeda65284e3d99ab8278383d1a2aec72
Author: Yasuhiro Matsuda 
Date:   2016-02-09T18:05:38Z

HOTFIX: open window segments in order




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request: KAFKA-3189: Kafka server returns UnknownServer...

2016-02-09 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/856


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3189) Kafka server returns UnknownServerException for inherited exceptions

2016-02-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15139336#comment-15139336
 ] 

ASF GitHub Bot commented on KAFKA-3189:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/856


> Kafka server returns UnknownServerException for inherited exceptions
> 
>
> Key: KAFKA-3189
> URL: https://issues.apache.org/jira/browse/KAFKA-3189
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Reporter: Jiangjie Qin
>Assignee: Grant Henke
> Fix For: 0.9.1.0
>
>
> This issue was introduced in KAFKA-2929. The problem is that we are using 
> o.a.k.common.protocol.Errors.forException() while some exceptions thrown by 
> the broker are still old Scala exceptions. This causes 
> Errors.forException() to always return UnknownServerException.
> InvalidMessageException inherits from CorruptRecordException, but it 
> seems Errors.forException() needs the exception class to be an exact match, 
> so it does not map the subclass InvalidMessageException to the correct error 
> code; instead it returns -1, which is UnknownServerException.
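
A self-contained illustration of the exact-class lookup problem and a
hierarchy-walking fix; the exception classes below are local stand-ins, not
Kafka's:

{code}
import java.util.HashMap;
import java.util.Map;

public class ErrorMappingDemo {
    static class CorruptRecordException extends RuntimeException {}
    static class InvalidMessageException extends CorruptRecordException {}

    static final Map<Class<?>, Short> CODES = new HashMap<>();
    static { CODES.put(CorruptRecordException.class, (short) 2); }

    // Exact-class lookup: a subclass misses and falls through to -1.
    static int forExceptionExact(Throwable t) {
        Short code = CODES.get(t.getClass());
        return code != null ? code : -1;
    }

    // Walking up the hierarchy maps a subclass to the nearest known code.
    static int forExceptionHierarchy(Throwable t) {
        for (Class<?> c = t.getClass(); c != Object.class; c = c.getSuperclass()) {
            Short code = CODES.get(c);
            if (code != null)
                return code;
        }
        return -1;
    }

    public static void main(String[] args) {
        Throwable t = new InvalidMessageException();
        System.out.println(forExceptionExact(t));     // -1 (unknown)
        System.out.println(forExceptionHierarchy(t)); // 2
    }
}
{code}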



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3189) Kafka server returns UnknownServerException for inherited exceptions

2016-02-09 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava updated KAFKA-3189:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Issue resolved by pull request 856
[https://github.com/apache/kafka/pull/856]

> Kafka server returns UnknownServerException for inherited exceptions
> 
>
> Key: KAFKA-3189
> URL: https://issues.apache.org/jira/browse/KAFKA-3189
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Reporter: Jiangjie Qin
>Assignee: Grant Henke
> Fix For: 0.9.1.0
>
>
> This issue was introduced in KAFKA-2929. The problem is that we are using 
> o.a.k.common.protocol.Errors.forException() while some exceptions thrown by 
> the broker are still old Scala exceptions. This causes 
> Errors.forException() to always return UnknownServerException.
> InvalidMessageException inherits from CorruptRecordException, but it 
> seems Errors.forException() needs the exception class to be an exact match, 
> so it does not map the subclass InvalidMessageException to the correct error 
> code; instead it returns -1, which is UnknownServerException.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [POLL] Make next Kafka Release 0.10.0.0 instead of 0.9.1.0

2016-02-09 Thread Neha Narkhede
+1 on 0.10.0.0

On Tue, Feb 9, 2016 at 10:09 AM, Stevo Slavić  wrote:

> +1 0.10.0.0
>
> On Tue, Feb 9, 2016, 19:08 Becket Qin  wrote:
>
> > Hi All,
> >
> > Next Kafka release will have several significant important new
> > feature/changes such as Kafka Stream, Message Format Change, Client
> > Interceptors and several new consumer API changes, etc. We feel it is
> > better to make next Kafka release 0.10.0.0 instead of 0.9.1.0.
> >
> > We would like to see what do people think of making the next release
> > 0.10.0.0.
> >
> > Thanks,
> >
> > Jiangjie (Becket) Qin
> >
>



-- 
Thanks,
Neha


Re: [POLL] Make next Kafka Release 0.10.0.0 instead of 0.9.1.0

2016-02-09 Thread Gwen Shapira
sure, why not :)

On Tue, Feb 9, 2016 at 10:07 AM, Becket Qin  wrote:

> Hi All,
>
> Next Kafka release will have several significant important new
> feature/changes such as Kafka Stream, Message Format Change, Client
> Interceptors and several new consumer API changes, etc. We feel it is
> better to make next Kafka release 0.10.0.0 instead of 0.9.1.0.
>
> We would like to see what do people think of making the next release
> 0.10.0.0.
>
> Thanks,
>
> Jiangjie (Becket) Qin
>


[jira] [Updated] (KAFKA-3216) "Modifying topics" section incorrectly says you can't change replication factor.

2016-02-09 Thread James Cheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Cheng updated KAFKA-3216:
---
Status: Patch Available  (was: Open)

> "Modifying topics" section incorrectly says you can't change replication 
> factor.
> 
>
> Key: KAFKA-3216
> URL: https://issues.apache.org/jira/browse/KAFKA-3216
> Project: Kafka
>  Issue Type: Bug
>Reporter: James Cheng
>Assignee: James Cheng
>
> The "Modifying Topics" section of the docs 
> (http://kafka.apache.org/documentation.html#basic_ops_modify_topic) says 
> {quote} Kafka does not currently support reducing the number of partitions 
> for a topic or changing the replication factor. {quote}
> But you *can* modify the replication factor. That second half of the sentence 
> should be removed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-3216: "Modifying topics" section incorre...

2016-02-09 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/881


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-3216) "Modifying topics" section incorrectly says you can't change replication factor.

2016-02-09 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-3216:

   Resolution: Fixed
Fix Version/s: 0.9.1.0
   0.9.0.1
   Status: Resolved  (was: Patch Available)

Issue resolved by pull request 881
[https://github.com/apache/kafka/pull/881]

> "Modifying topics" section incorrectly says you can't change replication 
> factor.
> 
>
> Key: KAFKA-3216
> URL: https://issues.apache.org/jira/browse/KAFKA-3216
> Project: Kafka
>  Issue Type: Bug
>Reporter: James Cheng
>Assignee: James Cheng
> Fix For: 0.9.0.1, 0.9.1.0
>
>
> The "Modifying Topics" section of the docs 
> (http://kafka.apache.org/documentation.html#basic_ops_modify_topic) says 
> {quote} Kafka does not currently support reducing the number of partitions 
> for a topic or changing the replication factor. {quote}
> But you *can* modify the replication factor. That second half of the sentence 
> should be removed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: HOTFIX: poll even when all partitions are paus...

2016-02-09 Thread ymatsuda
GitHub user ymatsuda opened a pull request:

https://github.com/apache/kafka/pull/893

HOTFIX: poll even when all partitions are paused. handle concurrent cleanup

* We need to poll periodically even when all partitions are paused in order 
to respond to a possible rebalance promptly.
* There is a race condition when two (or more) threads try to clean up the 
same state directory: one of the threads fails with FileNotFoundException, so 
the new code simply catches and ignores it (see the sketch after this list).
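
A minimal sketch of both fixes, under stated assumptions (a `running` flag, a
`handle` callback, and a plain recursive delete; none of these names come
from the actual patch):

```java
import java.io.File;
import java.io.FileNotFoundException;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;

class PausedPollSketch {
    volatile boolean running = true;

    void pollLoop(Consumer<byte[], byte[]> consumer) {
        while (running) {
            // Poll even if every partition is paused: the poll call still
            // drives the group protocol, so a rebalance is noticed promptly.
            ConsumerRecords<byte[], byte[]> records = consumer.poll(100);
            for (ConsumerRecord<byte[], byte[]> record : records)
                handle(record);
        }
    }

    void cleanupStateDir(File dir) {
        try {
            deleteRecursively(dir);
        } catch (FileNotFoundException e) {
            // Another thread won the race and already removed the
            // directory; nothing is left to clean up, so ignore it.
        }
    }

    static void deleteRecursively(File f) throws FileNotFoundException {
        File[] children = f.listFiles();
        if (children == null && !f.exists())
            throw new FileNotFoundException(f.getPath());
        if (children != null)
            for (File c : children)
                deleteRecursively(c);
        f.delete();
    }

    void handle(ConsumerRecord<byte[], byte[]> record) { /* application logic */ }
}
```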

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ymatsuda/kafka hotfix

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/893.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #893


commit 8210133bf15e47b5ef379b1a15b89c5dd8da6386
Author: Yasuhiro Matsuda 
Date:   2016-02-09T23:06:23Z

HOTFIX: poll even when all partitions are paused. handle concurrent cleanup




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-3159) Kafka consumer 0.9.0.0 client poll is very CPU intensive under certain conditions

2016-02-09 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-3159:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Issue resolved by pull request 884
[https://github.com/apache/kafka/pull/884]

> Kafka consumer 0.9.0.0  client poll is very CPU intensive under certain 
> conditions
> --
>
> Key: KAFKA-3159
> URL: https://issues.apache.org/jira/browse/KAFKA-3159
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.9.0.0
> Environment: Linux, Oracle JVM 8.
>Reporter: Rajiv Kurian
>Assignee: Jason Gustafson
> Fix For: 0.9.0.1
>
> Attachments: Memory-profile-patched-client.png, Screen Shot 
> 2016-02-01 at 11.09.32 AM.png
>
>
> We are using the new kafka consumer with the following config (as logged by 
> kafka)
> metric.reporters = []
> metadata.max.age.ms = 30
> value.deserializer = class 
> org.apache.kafka.common.serialization.ByteArrayDeserializer
> group.id = myGroup.id
> partition.assignment.strategy = 
> [org.apache.kafka.clients.consumer.RangeAssignor]
> reconnect.backoff.ms = 50
> sasl.kerberos.ticket.renew.window.factor = 0.8
> max.partition.fetch.bytes = 2097152
> bootstrap.servers = [myBrokerList]
> retry.backoff.ms = 100
> sasl.kerberos.kinit.cmd = /usr/bin/kinit
> sasl.kerberos.service.name = null
> sasl.kerberos.ticket.renew.jitter = 0.05
> ssl.keystore.type = JKS
> ssl.trustmanager.algorithm = PKIX
> enable.auto.commit = false
> ssl.key.password = null
> fetch.max.wait.ms = 1000
> sasl.kerberos.min.time.before.relogin = 6
> connections.max.idle.ms = 54
> ssl.truststore.password = null
> session.timeout.ms = 3
> metrics.num.samples = 2
> client.id = 
> ssl.endpoint.identification.algorithm = null
> key.deserializer = class sf.kafka.VoidDeserializer
> ssl.protocol = TLS
> check.crcs = true
> request.timeout.ms = 4
> ssl.provider = null
> ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
> ssl.keystore.location = null
> heartbeat.interval.ms = 3000
> auto.commit.interval.ms = 5000
> receive.buffer.bytes = 32768
> ssl.cipher.suites = null
> ssl.truststore.type = JKS
> security.protocol = PLAINTEXT
> ssl.truststore.location = null
> ssl.keystore.password = null
> ssl.keymanager.algorithm = SunX509
> metrics.sample.window.ms = 3
> fetch.min.bytes = 512
> send.buffer.bytes = 131072
> auto.offset.reset = earliest
> We use the consumer.assign() feature to assign a list of partitions and call 
> poll in a loop.  We have the following setup:
> 1. The messages have no key and we use the byte array deserializer to get 
> byte arrays from the config.
> 2. The messages themselves are on average about 75 bytes. We get this 
> number by dividing the Kafka broker bytes-in metric by the messages-in metric.
> 3. Each consumer is assigned about 64 partitions of the same topic spread 
> across three brokers.
> 4. We get very few messages per second maybe around 1-2 messages across all 
> partitions on a client right now.
> 5. We have no compression on the topic.
> Our run loop looks something like this
> while (isRunning()) {
> ConsumerRecords<Void, byte[]> records = null;
> try {
> // Here timeout is about 10 seconds, so it is pretty big.
> records = consumer.poll(timeout);
> } catch (Exception e) {
>// This never hits for us
> logger.error("Exception polling Kafka ", e);
> records = null;
> }
> if (records != null) {
> for (ConsumerRecord<Void, byte[]> record : records) {
>// The handler puts the byte array on a very fast ring buffer 
> so it barely takes any time.
> handler.handleMessage(ByteBuffer.wrap(record.value()));
> }
> }
> }
> With this setup our performance has taken a horrendous hit as soon as we 
> started this one thread that just polls Kafka in a loop.
> I profiled the application using Java Mission Control and have a few insights.
> 1. There doesn't seem to be a single hotspot. The consumer just ends up using 
> a lot of CPU for handling such a low number of messages. Our process was using 
> 16% CPU before we added a single consumer and it went to 25% and above after. 
> That's an increase of over 50% from a single consumer getting a single digit 
> number of small messages per second. Here is an attachment of the 

[GitHub] kafka pull request: KAFKA-3159: stale high watermark segment offse...

2016-02-09 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/884


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3159) Kafka consumer 0.9.0.0 client poll is very CPU intensive under certain conditions

2016-02-09 Thread Rajiv Kurian (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15140049#comment-15140049
 ] 

Rajiv Kurian commented on KAFKA-3159:
-

I am running a patched broker with a consumer consuming partitions that have 
no messages, and it seems to be working fine. So it looks good so far. I'll 
run it for longer and then finally run it with real messages to make sure 
there is no regression. Thanks, everyone!

> Kafka consumer 0.9.0.0  client poll is very CPU intensive under certain 
> conditions
> --
>
> Key: KAFKA-3159
> URL: https://issues.apache.org/jira/browse/KAFKA-3159
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.9.0.0
> Environment: Linux, Oracle JVM 8.
>Reporter: Rajiv Kurian
>Assignee: Jason Gustafson
> Fix For: 0.9.0.1
>
> Attachments: Memory-profile-patched-client.png, Screen Shot 
> 2016-02-01 at 11.09.32 AM.png
>
>
> We are using the new kafka consumer with the following config (as logged by 
> kafka)
> metric.reporters = []
> metadata.max.age.ms = 30
> value.deserializer = class 
> org.apache.kafka.common.serialization.ByteArrayDeserializer
> group.id = myGroup.id
> partition.assignment.strategy = 
> [org.apache.kafka.clients.consumer.RangeAssignor]
> reconnect.backoff.ms = 50
> sasl.kerberos.ticket.renew.window.factor = 0.8
> max.partition.fetch.bytes = 2097152
> bootstrap.servers = [myBrokerList]
> retry.backoff.ms = 100
> sasl.kerberos.kinit.cmd = /usr/bin/kinit
> sasl.kerberos.service.name = null
> sasl.kerberos.ticket.renew.jitter = 0.05
> ssl.keystore.type = JKS
> ssl.trustmanager.algorithm = PKIX
> enable.auto.commit = false
> ssl.key.password = null
> fetch.max.wait.ms = 1000
> sasl.kerberos.min.time.before.relogin = 6
> connections.max.idle.ms = 54
> ssl.truststore.password = null
> session.timeout.ms = 3
> metrics.num.samples = 2
> client.id = 
> ssl.endpoint.identification.algorithm = null
> key.deserializer = class sf.kafka.VoidDeserializer
> ssl.protocol = TLS
> check.crcs = true
> request.timeout.ms = 4
> ssl.provider = null
> ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
> ssl.keystore.location = null
> heartbeat.interval.ms = 3000
> auto.commit.interval.ms = 5000
> receive.buffer.bytes = 32768
> ssl.cipher.suites = null
> ssl.truststore.type = JKS
> security.protocol = PLAINTEXT
> ssl.truststore.location = null
> ssl.keystore.password = null
> ssl.keymanager.algorithm = SunX509
> metrics.sample.window.ms = 3
> fetch.min.bytes = 512
> send.buffer.bytes = 131072
> auto.offset.reset = earliest
> We use the consumer.assign() feature to assign a list of partitions and call 
> poll in a loop.  We have the following setup:
> 1. The messages have no key and we use the byte array deserializer to get 
> byte arrays from the config.
> 2. The messages themselves are on average about 75 bytes. We get this 
> number by dividing the Kafka broker bytes-in metric by the messages-in metric.
> 3. Each consumer is assigned about 64 partitions of the same topic spread 
> across three brokers.
> 4. We get very few messages per second maybe around 1-2 messages across all 
> partitions on a client right now.
> 5. We have no compression on the topic.
> Our run loop looks something like this
> while (isRunning()) {
> ConsumerRecords<Void, byte[]> records = null;
> try {
> // Here timeout is about 10 seconds, so it is pretty big.
> records = consumer.poll(timeout);
> } catch (Exception e) {
>// This never hits for us
> logger.error("Exception polling Kafka ", e);
> records = null;
> }
> if (records != null) {
> for (ConsumerRecord<Void, byte[]> record : records) {
>// The handler puts the byte array on a very fast ring buffer 
> so it barely takes any time.
> handler.handleMessage(ByteBuffer.wrap(record.value()));
> }
> }
> }
> With this setup our performance has taken a horrendous hit as soon as we 
> started this one thread that just polls Kafka in a loop.
> I profiled the application using Java Mission Control and have a few insights.
> 1. There doesn't seem to be a single hotspot. The consumer just ends up using 
> a lot of CPU for handling such a low number of messages. Our process was using 
> 16% CPU before we added a single consumer and it went to 

[jira] [Commented] (KAFKA-3159) Kafka consumer 0.9.0.0 client poll is very CPU intensive under certain conditions

2016-02-09 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15140054#comment-15140054
 ] 

Ismael Juma commented on KAFKA-3159:


Good news [~ra...@signalfx.com], thanks for reporting back.

> Kafka consumer 0.9.0.0  client poll is very CPU intensive under certain 
> conditions
> --
>
> Key: KAFKA-3159
> URL: https://issues.apache.org/jira/browse/KAFKA-3159
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.9.0.0
> Environment: Linux, Oracle JVM 8.
>Reporter: Rajiv Kurian
>Assignee: Jason Gustafson
> Fix For: 0.9.0.1
>
> Attachments: Memory-profile-patched-client.png, Screen Shot 
> 2016-02-01 at 11.09.32 AM.png
>
>
> We are using the new kafka consumer with the following config (as logged by 
> kafka)
> metric.reporters = []
> metadata.max.age.ms = 30
> value.deserializer = class 
> org.apache.kafka.common.serialization.ByteArrayDeserializer
> group.id = myGroup.id
> partition.assignment.strategy = 
> [org.apache.kafka.clients.consumer.RangeAssignor]
> reconnect.backoff.ms = 50
> sasl.kerberos.ticket.renew.window.factor = 0.8
> max.partition.fetch.bytes = 2097152
> bootstrap.servers = [myBrokerList]
> retry.backoff.ms = 100
> sasl.kerberos.kinit.cmd = /usr/bin/kinit
> sasl.kerberos.service.name = null
> sasl.kerberos.ticket.renew.jitter = 0.05
> ssl.keystore.type = JKS
> ssl.trustmanager.algorithm = PKIX
> enable.auto.commit = false
> ssl.key.password = null
> fetch.max.wait.ms = 1000
> sasl.kerberos.min.time.before.relogin = 6
> connections.max.idle.ms = 54
> ssl.truststore.password = null
> session.timeout.ms = 3
> metrics.num.samples = 2
> client.id = 
> ssl.endpoint.identification.algorithm = null
> key.deserializer = class sf.kafka.VoidDeserializer
> ssl.protocol = TLS
> check.crcs = true
> request.timeout.ms = 4
> ssl.provider = null
> ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
> ssl.keystore.location = null
> heartbeat.interval.ms = 3000
> auto.commit.interval.ms = 5000
> receive.buffer.bytes = 32768
> ssl.cipher.suites = null
> ssl.truststore.type = JKS
> security.protocol = PLAINTEXT
> ssl.truststore.location = null
> ssl.keystore.password = null
> ssl.keymanager.algorithm = SunX509
> metrics.sample.window.ms = 3
> fetch.min.bytes = 512
> send.buffer.bytes = 131072
> auto.offset.reset = earliest
> We use the consumer.assign() feature to assign a list of partitions and call 
> poll in a loop.  We have the following setup:
> 1. The messages have no key and we use the byte array deserializer to get 
> byte arrays from the config.
> 2. The messages themselves are on average about 75 bytes. We get this 
> number by dividing the Kafka broker bytes-in metric by the messages-in metric.
> 3. Each consumer is assigned about 64 partitions of the same topic spread 
> across three brokers.
> 4. We get very few messages per second maybe around 1-2 messages across all 
> partitions on a client right now.
> 5. We have no compression on the topic.
> Our run loop looks something like this
> while (isRunning()) {
> ConsumerRecords<Void, byte[]> records = null;
> try {
> // Here timeout is about 10 seconds, so it is pretty big.
> records = consumer.poll(timeout);
> } catch (Exception e) {
>// This never hits for us
> logger.error("Exception polling Kafka ", e);
> records = null;
> }
> if (records != null) {
> for (ConsumerRecord<Void, byte[]> record : records) {
>// The handler puts the byte array on a very fast ring buffer 
> so it barely takes any time.
> handler.handleMessage(ByteBuffer.wrap(record.value()));
> }
> }
> }
> With this setup our performance has taken a horrendous hit as soon as we 
> started this one thread that just polls Kafka in a loop.
> I profiled the application using Java Mission Control and have a few insights.
> 1. There doesn't seem to be a single hotspot. The consumer just ends up using 
> a lot of CPU for handling such a low number of messages. Our process was using 
> 16% CPU before we added a single consumer and it went to 25% and above after. 
> That's an increase of over 50% from a single consumer getting a single digit 
> number of small messages per second. Here is an attachment of the cpu usage 
> breakdown in the consumer (the 

Build failed in Jenkins: kafka_0.9.0_jdk7 #119

2016-02-09 Thread Apache Jenkins Server
See 

Changes:

[junrao] KAFKA-3159; stale high watermark segment offset causes early fetch

--
[...truncated 2270 lines...]

kafka.server.OffsetCommitTest > testUpdateOffsets PASSED

kafka.server.OffsetCommitTest > testLargeMetadataPayload PASSED

kafka.server.OffsetCommitTest > testCommitAndFetchOffsets PASSED

kafka.server.OffsetCommitTest > testOffsetExpiration PASSED

kafka.server.OffsetCommitTest > testNonExistingTopicOffsetCommit PASSED

kafka.server.PlaintextReplicaFetchTest > testReplicaFetcherThread PASSED

kafka.server.ReplicaManagerTest > testHighWaterMarkDirectoryMapping PASSED

kafka.server.ReplicaManagerTest > testIllegalRequiredAcks PASSED

kafka.server.ReplicaManagerTest > testHighwaterMarkRelativeDirectoryMapping 
PASSED

kafka.server.AdvertiseBrokerTest > testBrokerAdvertiseToZK PASSED

kafka.server.HighwatermarkPersistenceTest > 
testHighWatermarkPersistenceMultiplePartitions PASSED

kafka.server.HighwatermarkPersistenceTest > 
testHighWatermarkPersistenceSinglePartition PASSED

kafka.server.ServerGenerateBrokerIdTest > testAutoGenerateBrokerId PASSED

kafka.server.ServerGenerateBrokerIdTest > testMultipleLogDirsMetaProps PASSED

kafka.server.ServerGenerateBrokerIdTest > testDisableGeneratedBrokerId PASSED

kafka.server.ServerGenerateBrokerIdTest > testUserConfigAndGeneratedBrokerId 
PASSED

kafka.server.ServerGenerateBrokerIdTest > 
testConsistentBrokerIdFromUserConfigAndMetaProps PASSED

kafka.server.SslReplicaFetchTest > testReplicaFetcherThread PASSED

kafka.server.KafkaConfigTest > testAdvertiseConfigured PASSED

kafka.server.KafkaConfigTest > testLogRetentionTimeHoursProvided PASSED

kafka.server.KafkaConfigTest > testLogRollTimeBothMsAndHoursProvided PASSED

kafka.server.KafkaConfigTest > testLogRetentionValid PASSED

kafka.server.KafkaConfigTest > testSpecificProperties PASSED

kafka.server.KafkaConfigTest > testDefaultCompressionType PASSED

kafka.server.KafkaConfigTest > testDuplicateListeners PASSED

kafka.server.KafkaConfigTest > testLogRetentionUnlimited PASSED

kafka.server.KafkaConfigTest > testLogRetentionTimeMsProvided PASSED

kafka.server.KafkaConfigTest > testLogRollTimeNoConfigProvided PASSED

kafka.server.KafkaConfigTest > testInvalidInterBrokerSecurityProtocol PASSED

kafka.server.KafkaConfigTest > testAdvertiseDefaults PASSED

kafka.server.KafkaConfigTest > testBadListenerProtocol PASSED

kafka.server.KafkaConfigTest > testListenerDefaults PASSED

kafka.server.KafkaConfigTest > testLogRetentionTimeBothMinutesAndHoursProvided 
PASSED

kafka.server.KafkaConfigTest > testUncleanElectionDisabled PASSED

kafka.server.KafkaConfigTest > testLogRetentionTimeNoConfigProvided PASSED

kafka.server.KafkaConfigTest > testFromPropsInvalid PASSED

kafka.server.KafkaConfigTest > testInvalidCompressionType PASSED

kafka.server.KafkaConfigTest > testAdvertiseHostNameDefault PASSED

kafka.server.KafkaConfigTest > testLogRetentionTimeMinutesProvided PASSED

kafka.server.KafkaConfigTest > testValidCompressionType PASSED

kafka.server.KafkaConfigTest > testUncleanElectionInvalid PASSED

kafka.server.KafkaConfigTest > testLogRetentionTimeBothMinutesAndMsProvided 
PASSED

kafka.server.KafkaConfigTest > testLogRollTimeMsProvided PASSED

kafka.server.KafkaConfigTest > testUncleanLeaderElectionDefault PASSED

kafka.server.KafkaConfigTest > testInvalidAdvertisedListenersProtocol PASSED

kafka.server.KafkaConfigTest > testUncleanElectionEnabled PASSED

kafka.server.KafkaConfigTest > testAdvertisePortDefault PASSED

kafka.server.KafkaConfigTest > testVersionConfiguration PASSED

kafka.server.KafkaConfigTest > testEqualAdvertisedListenersProtocol PASSED

kafka.server.LogRecoveryTest > testHWCheckpointNoFailuresMultipleLogSegments 
PASSED
:kafka_0.9.0_jdk7:core:test FAILED
:test_core_2_11_7 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':core:test'.
> Process 'Gradle Test Executor 2' finished with non-zero exit value 1

* Try:
Run with --info or --debug option to get more log output.

* Exception is:
org.gradle.api.tasks.TaskExecutionException: Execution failed for task 
':core:test'.
at 
org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeActions(ExecuteActionsTaskExecuter.java:69)
at 
org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.execute(ExecuteActionsTaskExecuter.java:46)
at 
org.gradle.api.internal.tasks.execution.PostExecutionAnalysisTaskExecuter.execute(PostExecutionAnalysisTaskExecuter.java:35)
at 
org.gradle.api.internal.tasks.execution.SkipUpToDateTaskExecuter.execute(SkipUpToDateTaskExecuter.java:64)
at 
org.gradle.api.internal.tasks.execution.ValidatingTaskExecuter.execute(ValidatingTaskExecuter.java:58)
at 
org.gradle.api.internal.tasks.execution.SkipEmptySourceFilesTaskExecuter.execute(SkipEmptySourceFilesTaskExecuter.java:52)
at 

Re: [POLL] Make next Kafka Release 0.10.0.0 instead of 0.9.1.0

2016-02-09 Thread Ismael Juma
Hi Becket,

Good points. Some comments inline.

On Tue, Feb 9, 2016 at 8:45 PM, Becket Qin  wrote:

> Good point on the upgrade path. The release plan says that 0.10.0.0 is
> targeted in Q2, 2016. If that is accurate, there are 6-7 months between
> 0.9.0.0 and 0.10.0.0, i.e. there are still about five months before the
> next release.


I am not sure how accurate that release plan[1] is since it was last
updated in September 2015 (before 0.9.0.0 was released).
For comparison, Kafka 0.7.0 was released in January 2012, 0.8.0 in December
2013 (almost two years later), 0.9.0.0 in November 2015 (almost two years
later). It is likely that this will be the fastest major release bump by
far. It is not necessarily an issue, but we should take special care to
help users.

> Our previous documentation only covers how to upgrade from the last 
> official release. If the release interval is a concern, we can add 
> documentation on how to upgrade from 0.8.x to 0.10.0.0 directly. 
> Alternatively, we can suggest users first upgrade to 0.9.0.0 and then to 
> 0.10.0.0.


These are the options indeed. If there is an upgrade path from 0.8.2.x to
0.10.0.0 that is simpler than having to upgrade to 0.9.0.x first, I think
it would be valuable to document it as our users would appreciate it. This
is obviously more work (we would want to write ducktape tests to ensure
that the documentation steps actually work as we expect) and it potentially
introduces new failure scenarios. I'd be interested in what others think.

Ismael

[1] https://cwiki.apache.org/confluence/display/KAFKA/Future+release+plan


[GitHub] kafka pull request: MINOR: add setUncaughtExceptionHandler to Kafk...

2016-02-09 Thread ymatsuda
GitHub user ymatsuda opened a pull request:

https://github.com/apache/kafka/pull/894

MINOR: add setUncaughtExceptionHandler to KafkaStreams



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ymatsuda/kafka minor

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/894.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #894


commit d6222a81c8b9d9b9e47c4f139d783f3141c0ae5c
Author: Yasuhiro Matsuda 
Date:   2016-02-09T23:52:39Z

MINOR: add setUncaughtExceptionHandler to KafkaStreams




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Jenkins build is back to normal : kafka-trunk-jdk8 #359

2016-02-09 Thread Apache Jenkins Server
See 



[jira] [Commented] (KAFKA-2750) Sender.java: handleProduceResponse does not check protocol version

2016-02-09 Thread Joel Koshy (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15140105#comment-15140105
 ] 

Joel Koshy commented on KAFKA-2750:
---

[~felixgv] You mean retry with the older protocol? It does not do that 
currently; i.e., it always sends with the latest version. However, it should 
be possible after we support protocol version querying:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-35+-+Retrieving+protocol+version
Until that is available, we recommend upgrading the clients only _after_ the 
brokers.
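
A hypothetical sketch of picking the highest mutually supported version once
the broker can report its API versions; none of these names exist in the
current client:

{code}
import java.util.Collections;
import java.util.Map;

public class VersionNegotiation {
    // Pick the highest produce-request version both sides support.
    static short chooseProduceVersion(short clientMax, Map<String, Short> brokerApiVersions) {
        Short brokerMax = brokerApiVersions.get("Produce");
        if (brokerMax == null)
            return clientMax; // no info from the broker; send the latest
        return (short) Math.min(clientMax, brokerMax);
    }

    public static void main(String[] args) {
        Map<String, Short> broker = Collections.singletonMap("Produce", (short) 0);
        System.out.println(chooseProduceVersion((short) 1, broker)); // 0
    }
}
{code}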

> Sender.java: handleProduceResponse does not check protocol version
> --
>
> Key: KAFKA-2750
> URL: https://issues.apache.org/jira/browse/KAFKA-2750
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.0
>Reporter: Geoff Anderson
>
> If you try to run a 0.9 producer against a 0.8.2.2 Kafka broker, you get a fairly 
> cryptic error message:
> [2015-11-04 18:55:43,583] ERROR Uncaught error in kafka producer I/O thread:  
> (org.apache.kafka.clients.producer.internals.Sender)
> org.apache.kafka.common.protocol.types.SchemaException: Error reading field 
> 'throttle_time_ms': java.nio.BufferUnderflowException
>   at org.apache.kafka.common.protocol.types.Schema.read(Schema.java:71)
>   at 
> org.apache.kafka.clients.NetworkClient.handleCompletedReceives(NetworkClient.java:462)
>   at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:279)
>   at 
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:216)
>   at 
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:141)
> Although we shouldn't expect a 0.9 producer to work against a 0.8.X broker 
> since the protocol version has been increased, perhaps the error could be 
> clearer.
> The cause seems to be that in Sender.java, handleProduceResponse does not 
> have any mechanism for checking the protocol version of the received produce 
> response; it just calls a constructor that blindly tries to read the 
> throttle time field, which in this case fails.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3159) Kafka consumer 0.9.0.0 client poll is very CPU intensive under certain conditions

2016-02-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15139950#comment-15139950
 ] 

ASF GitHub Bot commented on KAFKA-3159:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/884


> Kafka consumer 0.9.0.0  client poll is very CPU intensive under certain 
> conditions
> --
>
> Key: KAFKA-3159
> URL: https://issues.apache.org/jira/browse/KAFKA-3159
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.9.0.0
> Environment: Linux, Oracle JVM 8.
>Reporter: Rajiv Kurian
>Assignee: Jason Gustafson
> Fix For: 0.9.0.1
>
> Attachments: Memory-profile-patched-client.png, Screen Shot 
> 2016-02-01 at 11.09.32 AM.png
>
>
> We are using the new kafka consumer with the following config (as logged by 
> kafka)
> metric.reporters = []
> metadata.max.age.ms = 30
> value.deserializer = class 
> org.apache.kafka.common.serialization.ByteArrayDeserializer
> group.id = myGroup.id
> partition.assignment.strategy = 
> [org.apache.kafka.clients.consumer.RangeAssignor]
> reconnect.backoff.ms = 50
> sasl.kerberos.ticket.renew.window.factor = 0.8
> max.partition.fetch.bytes = 2097152
> bootstrap.servers = [myBrokerList]
> retry.backoff.ms = 100
> sasl.kerberos.kinit.cmd = /usr/bin/kinit
> sasl.kerberos.service.name = null
> sasl.kerberos.ticket.renew.jitter = 0.05
> ssl.keystore.type = JKS
> ssl.trustmanager.algorithm = PKIX
> enable.auto.commit = false
> ssl.key.password = null
> fetch.max.wait.ms = 1000
> sasl.kerberos.min.time.before.relogin = 6
> connections.max.idle.ms = 54
> ssl.truststore.password = null
> session.timeout.ms = 3
> metrics.num.samples = 2
> client.id = 
> ssl.endpoint.identification.algorithm = null
> key.deserializer = class sf.kafka.VoidDeserializer
> ssl.protocol = TLS
> check.crcs = true
> request.timeout.ms = 4
> ssl.provider = null
> ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
> ssl.keystore.location = null
> heartbeat.interval.ms = 3000
> auto.commit.interval.ms = 5000
> receive.buffer.bytes = 32768
> ssl.cipher.suites = null
> ssl.truststore.type = JKS
> security.protocol = PLAINTEXT
> ssl.truststore.location = null
> ssl.keystore.password = null
> ssl.keymanager.algorithm = SunX509
> metrics.sample.window.ms = 3
> fetch.min.bytes = 512
> send.buffer.bytes = 131072
> auto.offset.reset = earliest
> We use the consumer.assign() feature to assign a list of partitions and call 
> poll in a loop.  We have the following setup:
> 1. The messages have no key and we use the byte array deserializer to get 
> byte arrays from the config.
> 2. The messages themselves are on average about 75 bytes. We get this 
> number by dividing the Kafka broker bytes-in metric by the messages-in metric.
> 3. Each consumer is assigned about 64 partitions of the same topic spread 
> across three brokers.
> 4. We get very few messages per second maybe around 1-2 messages across all 
> partitions on a client right now.
> 5. We have no compression on the topic.
> Our run loop looks something like this
> while (isRunning()) {
> ConsumerRecords<Void, byte[]> records = null;
> try {
> // Here timeout is about 10 seconds, so it is pretty big.
> records = consumer.poll(timeout);
> } catch (Exception e) {
>// This never hits for us
> logger.error("Exception polling Kafka ", e);
> records = null;
> }
> if (records != null) {
> for (ConsumerRecord<Void, byte[]> record : records) {
>// The handler puts the byte array on a very fast ring buffer 
> so it barely takes any time.
> handler.handleMessage(ByteBuffer.wrap(record.value()));
> }
> }
> }
> With this setup our performance has taken a horrendous hit as soon as we 
> started this one thread that just polls Kafka in a loop.
> I profiled the application using Java Mission Control and have a few insights.
> 1. There doesn't seem to be a single hotspot. The consumer just ends up using 
> a lot of CPU for handling such a low number of messages. Our process was using 
> 16% CPU before we added a single consumer and it went to 25% and above after. 
> That's an increase of over 50% from a single consumer getting a single digit 
> number of small messages per second. Here is an attachment of the cpu 

Build failed in Jenkins: kafka-trunk-jdk7 #1032

2016-02-09 Thread Apache Jenkins Server
See 

Changes:

[junrao] KAFKA-3159; stale high watermark segment offset causes early fetch

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H11 (docker Ubuntu ubuntu yahoo-not-h2) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision 6d0dca7345d9e3c0b8924496a4632954ca1268e5 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 6d0dca7345d9e3c0b8924496a4632954ca1268e5
 > git rev-list b5e6b8671a5b6d97d5026261ae8d62b54f068e53 # timeout=10
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/hudson66590423326696157.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.4-rc-2/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:downloadWrapper

BUILD SUCCESSFUL

Total time: 44.342 secs
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/hudson2092619230932633646.sh
+ export GRADLE_OPTS=-Xmx1024m
+ GRADLE_OPTS=-Xmx1024m
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 clean jarAll testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.10/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:clean UP-TO-DATE
:clients:clean UP-TO-DATE
:connect:clean UP-TO-DATE
:core:clean UP-TO-DATE
:examples:clean UP-TO-DATE
:log4j-appender:clean UP-TO-DATE
:streams:clean UP-TO-DATE
:tools:clean UP-TO-DATE
:connect:api:clean UP-TO-DATE
:connect:file:clean UP-TO-DATE
:connect:json:clean UP-TO-DATE
:connect:runtime:clean UP-TO-DATE
:streams:examples:clean UP-TO-DATE
:jar_core_2_10
Building project 'core' with Scala version 2.10.6
:kafka-trunk-jdk7:clients:compileJava
:jar_core_2_10 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Failed to capture snapshot of input files for task 'compileJava' during 
up-to-date check.  See stacktrace for details.
> Could not add entry 
> '/home/jenkins/.gradle/caches/modules-2/files-2.1/net.jpountz.lz4/lz4/1.3.0/c708bb2590c0652a642236ef45d9f99ff842a2ce/lz4-1.3.0.jar'
>  to cache fileHashes.bin 
> (

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED

Total time: 53.765 secs
Build step 'Execute shell' marked build as failure
Recording test results
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
ERROR: Publisher 'Publish JUnit test result report' failed: No test report 
files were found. Configuration error?
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51


[jira] [Updated] (KAFKA-3221) kafka-acls.sh must verify if a user has sufficient privileges to perform acls CRUD

2016-02-09 Thread Ashish K Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish K Singh updated KAFKA-3221:
--
Status: Patch Available  (was: Open)

> kafka-acls.sh must verify if a user has sufficient privileges to perform acls 
> CRUD
> --
>
> Key: KAFKA-3221
> URL: https://issues.apache.org/jira/browse/KAFKA-3221
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.9.0.0
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
> Fix For: 0.9.0.1
>
>
> kafka-acls.sh provides an insecure entry point to Kafka's authorization: no 
> checks are performed, and no user information is provided to the authorizer 
> to validate a user, before the user performs CRUD on acls. This is a 
> security hole that must be addressed.
> As Kafka supports pluggable authorization, we need to look at this issue 
> from two aspects.
> 1. Default ZK-based authorizer, SimpleAclAuthorizer
> For SimpleAclAuthorizer, one could rely on ZooKeeper authentication to check 
> whether a user can really perform CRUD on Kafka acls. However, this check 
> relies on the assumption, which is usually true, that non-admin users won't 
> have access to the Kafka service's user account.
> 2. Custom authorizer
> A custom authorizer that executes in the same address space as the Kafka 
> broker has no way of determining which user is actually trying to perform 
> CRUD on acls. For authorize requests, the authorizer gets the user 
> information, a KafkaPrincipal, from the session; however, for CRUD on acls, 
> i.e., addAcls, removeAcls, and getAcls, the authorizer does not have the 
> requestor's info to decide whether it should allow or deny the request (the 
> interface gap is sketched below).
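
A sketch of that interface gap, loosely modeled on the pluggable authorizer
API; the types below are simplified stand-ins, not the real trait:

{code}
import java.util.Set;

// Simplified stand-ins for the authorizer types; illustrative only.
class Session { String principal; }
class Operation { String name; }
class Resource { String name; }
class Acl { String principal; String operation; }

interface Authorizer {
    // authorize() receives the session, and with it the KafkaPrincipal,
    // so the caller's identity is known here.
    boolean authorize(Session session, Operation operation, Resource resource);

    // The acl CRUD methods receive no session or principal, so a custom
    // authorizer cannot tell who is asking to change the acls.
    void addAcls(Set<Acl> acls, Resource resource);
    boolean removeAcls(Set<Acl> acls, Resource resource);
    Set<Acl> getAcls(Resource resource);
}
{code}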



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2508) Replace UpdateMetadata{Request,Response} with org.apache.kafka.common.requests equivalent

2016-02-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15140276#comment-15140276
 ] 

ASF GitHub Bot commented on KAFKA-2508:
---

GitHub user granthenke opened a pull request:

https://github.com/apache/kafka/pull/896

KAFKA-2508: Replace UpdateMetadata{Request,Response} with o.a.k.c.req…

…uests equivalent

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/granthenke/kafka update-metadata

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/896.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #896


commit 537b8089d94326f937fc050f03a522e573cfa0cd
Author: Grant Henke 
Date:   2016-02-10T04:13:28Z

KAFKA-2508: Replace UpdateMetadata{Request,Response} with o.a.k.c.requests 
equivalent




> Replace UpdateMetadata{Request,Response} with 
> org.apache.kafka.common.requests equivalent
> -
>
> Key: KAFKA-2508
> URL: https://issues.apache.org/jira/browse/KAFKA-2508
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Ismael Juma
>Assignee: Grant Henke
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3141) kafka-acls.sh throws ArrayIndexOutOfBoundsException for an invalid authorizer-property

2016-02-09 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-3141:

   Resolution: Fixed
Fix Version/s: 0.9.1.0
   Status: Resolved  (was: Patch Available)

Issue resolved by pull request 806
[https://github.com/apache/kafka/pull/806]

> kafka-acls.sh throws ArrayIndexOutOfBoundsException for an invalid 
> authorizer-property
> --
>
> Key: KAFKA-3141
> URL: https://issues.apache.org/jira/browse/KAFKA-3141
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
> Fix For: 0.9.0.1, 0.9.1.0
>
>
> kafka-acls.sh throws ArrayIndexOutOfBoundsException for an invalid 
> authorizer-property. Stack trace below.
> {code}
> Error while executing topic Acl command 1
> java.lang.ArrayIndexOutOfBoundsException: 1
>   at 
> kafka.admin.AclCommand$$anonfun$withAuthorizer$2.apply(AclCommand.scala:65)
>   at 
> kafka.admin.AclCommand$$anonfun$withAuthorizer$2.apply(AclCommand.scala:65)
>   at 
> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
>   at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
>   at kafka.admin.AclCommand$.withAuthorizer(AclCommand.scala:65)
>   at kafka.admin.AclCommand$.addAcl(AclCommand.scala:78)
>   at kafka.admin.AclCommand$.main(AclCommand.scala:48)
>   at kafka.admin.AclCommand.main(AclCommand.scala)
> {code}
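
The stack trace points at the usual unchecked-split bug: each 
--authorizer-properties argument is split on '=' and index 1 is read without 
checking the result. A hedged Scala sketch of the validation (the helper name 
and shape are illustrative, not the actual AclCommand code):

{code}
import java.util.Properties

// Illustrative validation: reject malformed key=value pairs instead of
// blindly indexing into the split result (the source of the AIOOBE above).
def parseAuthorizerProperties(args: Seq[String]): Properties = {
  val props = new Properties()
  args.foreach { arg =>
    val parts = arg.split("=", 2) // limit=2 keeps any '=' inside the value
    if (parts.length != 2 || parts(0).isEmpty)
      throw new IllegalArgumentException(s"Invalid authorizer property '$arg': expected key=value")
    props.put(parts(0), parts(1))
  }
  props
}

// e.g. parseAuthorizerProperties(Seq("zookeeper.connect=localhost:2181"))
{code}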



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3141) kafka-acls.sh throws ArrayIndexOutOfBoundsException for an invalid authorizer-property

2016-02-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15140299#comment-15140299
 ] 

ASF GitHub Bot commented on KAFKA-3141:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/806


> kafka-acls.sh throws ArrayIndexOutOfBoundsException for an invalid 
> authorizer-property
> --
>
> Key: KAFKA-3141
> URL: https://issues.apache.org/jira/browse/KAFKA-3141
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
> Fix For: 0.9.0.1, 0.9.1.0
>
>
> kafka-acls.sh throws ArrayIndexOutOfBoundsException for an invalid 
> authorizer-property. Stack trace below.
> {code}
> Error while executing topic Acl command 1
> java.lang.ArrayIndexOutOfBoundsException: 1
>   at 
> kafka.admin.AclCommand$$anonfun$withAuthorizer$2.apply(AclCommand.scala:65)
>   at 
> kafka.admin.AclCommand$$anonfun$withAuthorizer$2.apply(AclCommand.scala:65)
>   at 
> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
>   at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
>   at kafka.admin.AclCommand$.withAuthorizer(AclCommand.scala:65)
>   at kafka.admin.AclCommand$.addAcl(AclCommand.scala:78)
>   at kafka.admin.AclCommand$.main(AclCommand.scala:48)
>   at kafka.admin.AclCommand.main(AclCommand.scala)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-3162: Added producer and consumer interc...

2016-02-09 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/854


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request: KAFKA-2508: Replace UpdateMetadata{Request,Res...

2016-02-09 Thread granthenke
GitHub user granthenke opened a pull request:

https://github.com/apache/kafka/pull/896

KAFKA-2508: Replace UpdateMetadata{Request,Response} with o.a.k.c.req…

…uests equivalent

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/granthenke/kafka update-metadata

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/896.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #896


commit 537b8089d94326f937fc050f03a522e573cfa0cd
Author: Grant Henke 
Date:   2016-02-10T04:13:28Z

KAFKA-2508: Replace UpdateMetadata{Request,Response} with o.a.k.c.requests 
equivalent




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Build failed in Jenkins: kafka-trunk-jdk7 #1033

2016-02-09 Thread Apache Jenkins Server
See 

Changes:

[cshapi] KAFKA-3141: Add checks to catch invalid authorizer properties

[cshapi] KAFKA-3162: Added producer and consumer interceptors

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu3 (Ubuntu ubuntu legacy-ubuntu) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision d5b43b19bb06e9cdc606312c8bcf87ed267daf44 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f d5b43b19bb06e9cdc606312c8bcf87ed267daf44
 > git rev-list 6d0dca7345d9e3c0b8924496a4632954ca1268e5 # timeout=10
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/hudson7516125697815070850.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.4-rc-2/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:downloadWrapper

BUILD SUCCESSFUL

Total time: 29.089 secs
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/hudson7643548444719686796.sh
+ export GRADLE_OPTS=-Xmx1024m
+ GRADLE_OPTS=-Xmx1024m
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 clean jarAll testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.10/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:clean UP-TO-DATE
:clients:clean UP-TO-DATE
:connect:clean UP-TO-DATE
:core:clean UP-TO-DATE
:examples:clean UP-TO-DATE
:log4j-appender:clean UP-TO-DATE
:streams:clean UP-TO-DATE
:tools:clean UP-TO-DATE
:connect:api:clean UP-TO-DATE
:connect:file:clean UP-TO-DATE
:connect:json:clean UP-TO-DATE
:connect:runtime:clean UP-TO-DATE
:streams:examples:clean UP-TO-DATE
:jar_core_2_10
Building project 'core' with Scala version 2.10.6
:kafka-trunk-jdk7:clients:compileJava
:jar_core_2_10 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Failed to capture snapshot of input files for task 'compileJava' during 
up-to-date check.  See stacktrace for details.
> Could not add entry 
> '/home/jenkins/.gradle/caches/modules-2/files-2.1/net.jpountz.lz4/lz4/1.3.0/c708bb2590c0652a642236ef45d9f99ff842a2ce/lz4-1.3.0.jar'
>  to cache fileHashes.bin 
> (/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk7/.gradle/2.10/taskArtifacts/fileHashes.bin).

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED

Total time: 14.822 secs
Build step 'Execute shell' marked build as failure
Recording test results
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
ERROR: Publisher 'Publish JUnit test result report' failed: No test report 
files were found. Configuration error?
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51


[jira] [Updated] (KAFKA-2923) Improve 0.9.0 Upgrade Documents

2016-02-09 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-2923:
---
Fix Version/s: (was: 0.9.0.1)
   0.9.1.0

> Improve 0.9.0 Upgrade Documents 
> 
>
> Key: KAFKA-2923
> URL: https://issues.apache.org/jira/browse/KAFKA-2923
> Project: Kafka
>  Issue Type: Bug
>Reporter: Guozhang Wang
>Assignee: Grant Henke
>  Labels: newbie
> Fix For: 0.9.1.0
>
>
> A couple of places where we can improve the upgrade docs:
> 1) An explanation of replica.lag.time.max.ms and how it relates to the old 
> configs.
> 2) Default quota configs.
> 3) Client-server compatibility: do old clients work with new servers, and do 
> new clients work with old servers?
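
For points 1 and 2, a short Scala sketch of the broker settings the docs 
should cover; the values shown are examples for illustration, not the shipped 
defaults:

{code}
import java.util.Properties

// Illustrative broker settings the upgrade notes should explain.
val brokerProps = new Properties()
// Replaces the old replica.lag.max.messages-style lag check: a follower is
// considered out of sync if it has not caught up for this many milliseconds.
brokerProps.put("replica.lag.time.max.ms", "10000")
// Default client quotas, applied per client-id (bytes/second).
brokerProps.put("quota.producer.default", "10485760")
brokerProps.put("quota.consumer.default", "10485760")
{code}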



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3221) kafka-acls.sh must verify if a user has sufficient privileges to perform acls CRUD

2016-02-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15140234#comment-15140234
 ] 

ASF GitHub Bot commented on KAFKA-3221:
---

GitHub user SinghAsDev opened a pull request:

https://github.com/apache/kafka/pull/895

KAFKA-3221: Pass requesting user information to authorizer to enable …

…user authorization before modifying acls.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/SinghAsDev/kafka KAFKA-3221

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/895.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #895


commit 0378232be9997be41c6417131549863bd7a972c6
Author: Ashish Singh 
Date:   2016-02-10T02:46:02Z

KAFKA-3221: Pass requesting user information to authorizer to enable user 
authorization before modifying acls.




> kafka-acls.sh must verify if a user has sufficient privileges to perform acls 
> CRUD
> --
>
> Key: KAFKA-3221
> URL: https://issues.apache.org/jira/browse/KAFKA-3221
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.9.0.0
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
> Fix For: 0.9.0.1
>
>
> kafka-acls.sh provides an insecure entry point to Kafka's authorization. No 
> checks are performed, and no user information is provided to the authorizer 
> to validate a user, before the user performs CRUD operations on acls. This 
> is a security hole that must be addressed.
> As Kafka supports pluggable authorization, we need to look at this issue from 
> two aspects.
> 1. Default ZooKeeper-based authorizer, SimpleAclAuthorizer
> For SimpleAclAuthorizer, one could rely on ZooKeeper authentication to check 
> whether a user can really perform CRUD on Kafka acls. However, this check 
> relies on the assumption, which is usually true, that non-admin users won't 
> have access to the Kafka service's user account.
> 2. Custom Authorizer
> A custom authorizer that executes in the same address space as the Kafka 
> broker has no way of determining which user is actually trying to perform 
> CRUD on acls. For authorize requests, the authorizer gets the user 
> information, a KafkaPrincipal, from the session; however, for CRUD of acls, 
> i.e., addAcls, removeAcls and getAcls, the authorizer does not have the 
> requestor's info and so cannot decide whether to allow or deny the request.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-3221: Pass requesting user information t...

2016-02-09 Thread SinghAsDev
GitHub user SinghAsDev opened a pull request:

https://github.com/apache/kafka/pull/895

KAFKA-3221: Pass requesting user information to authorizer to enable …

…user authorization before modifying acls.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/SinghAsDev/kafka KAFKA-3221

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/895.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #895


commit 0378232be9997be41c6417131549863bd7a972c6
Author: Ashish Singh 
Date:   2016-02-10T02:46:02Z

KAFKA-3221: Pass requesting user information to authorizer to enable user 
authorization before modifying acls.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-3221) kafka-acls.sh must verify if a user has sufficient privileges to perform acls CRUD

2016-02-09 Thread Ashish K Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish K Singh updated KAFKA-3221:
--
Fix Version/s: 0.9.0.1

> kafka-acls.sh must verify if a user has sufficient privileges to perform acls 
> CRUD
> --
>
> Key: KAFKA-3221
> URL: https://issues.apache.org/jira/browse/KAFKA-3221
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.9.0.0
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
> Fix For: 0.9.0.1
>
>
> kafka-acls.sh provides an insecure entry point to Kafka's authorization. No 
> checks are performed, and no user information is provided to the authorizer 
> to validate a user, before the user performs CRUD operations on acls. This 
> is a security hole that must be addressed.
> As Kafka supports pluggable authorization, we need to look at this issue from 
> two aspects.
> 1. Default ZooKeeper-based authorizer, SimpleAclAuthorizer
> For SimpleAclAuthorizer, one could rely on ZooKeeper authentication to check 
> whether a user can really perform CRUD on Kafka acls. However, this check 
> relies on the assumption, which is usually true, that non-admin users won't 
> have access to the Kafka service's user account.
> 2. Custom Authorizer
> A custom authorizer that executes in the same address space as the Kafka 
> broker has no way of determining which user is actually trying to perform 
> CRUD on acls. For authorize requests, the authorizer gets the user 
> information, a KafkaPrincipal, from the session; however, for CRUD of acls, 
> i.e., addAcls, removeAcls and getAcls, the authorizer does not have the 
> requestor's info and so cannot decide whether to allow or deny the request.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2508) Replace UpdateMetadata{Request,Response} with org.apache.kafka.common.requests equivalent

2016-02-09 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2508?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KAFKA-2508:
---
Status: Patch Available  (was: Open)

> Replace UpdateMetadata{Request,Response} with 
> org.apache.kafka.common.requests equivalent
> -
>
> Key: KAFKA-2508
> URL: https://issues.apache.org/jira/browse/KAFKA-2508
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Ismael Juma
>Assignee: Grant Henke
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-3141: Add checks to catch invalid author...

2016-02-09 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/806


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-3162) KIP-42: Add Producer and Consumer Interceptors

2016-02-09 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-3162:

   Resolution: Fixed
Fix Version/s: 0.9.1.0
   Status: Resolved  (was: Patch Available)

Issue resolved by pull request 854
[https://github.com/apache/kafka/pull/854]

> KIP-42: Add Producer and Consumer Interceptors
> --
>
> Key: KAFKA-3162
> URL: https://issues.apache.org/jira/browse/KAFKA-3162
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Anna Povzner
>Assignee: Anna Povzner
> Fix For: 0.9.1.0
>
>
> This JIRA is for the main part of the KIP-42 implementation, which includes:
> 1. Add a ProducerInterceptor interface and call its callbacks from the 
> appropriate places in the Kafka producer.
> 2. Add a ConsumerInterceptor interface and call its callbacks from the 
> appropriate places in the Kafka consumer.
> 3. Add unit tests for the interceptor changes.
> 4. Add an integration test for mutable consumer and producer interceptors 
> running at the same time.
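
As an illustration of points 1 and 4, a minimal Scala sketch of a producer 
interceptor along the lines of the KIP-42 interface; treat the details as a 
sketch of the shape described in the KIP rather than the final API, and 
assume it would be registered via the producer's interceptor.classes config.

{code}
import java.util.{Map => JMap}
import java.util.concurrent.atomic.AtomicLong
import org.apache.kafka.clients.producer.{ProducerInterceptor, ProducerRecord, RecordMetadata}

// Counts sends and acks; onSend may also mutate the record before it is
// serialized, which is what "mutable interceptors" in point 4 refers to.
class CountingInterceptor extends ProducerInterceptor[String, String] {
  private val sent  = new AtomicLong(0)
  private val acked = new AtomicLong(0)

  override def configure(configs: JMap[String, _]): Unit = ()

  override def onSend(record: ProducerRecord[String, String]): ProducerRecord[String, String] = {
    sent.incrementAndGet()
    record // return the (possibly modified) record to be sent
  }

  override def onAcknowledgement(metadata: RecordMetadata, exception: Exception): Unit =
    if (exception == null) acked.incrementAndGet()

  override def close(): Unit =
    println(s"sent=${sent.get}, acked=${acked.get}")
}
{code}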



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3162) KIP-42: Add Producer and Consumer Interceptors

2016-02-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15140324#comment-15140324
 ] 

ASF GitHub Bot commented on KAFKA-3162:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/854


> KIP-42: Add Producer and Consumer Interceptors
> --
>
> Key: KAFKA-3162
> URL: https://issues.apache.org/jira/browse/KAFKA-3162
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Anna Povzner
>Assignee: Anna Povzner
> Fix For: 0.9.1.0
>
>
> This JIRA is for the main part of the KIP-42 implementation, which includes:
> 1. Add a ProducerInterceptor interface and call its callbacks from the 
> appropriate places in the Kafka producer.
> 2. Add a ConsumerInterceptor interface and call its callbacks from the 
> appropriate places in the Kafka consumer.
> 3. Add unit tests for the interceptor changes.
> 4. Add an integration test for mutable consumer and producer interceptors 
> running at the same time.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk8 #360

2016-02-09 Thread Apache Jenkins Server
See 

Changes:

[cshapi] KAFKA-3141: Add checks to catch invalid authorizer properties

[cshapi] KAFKA-3162: Added producer and consumer interceptors

--
[...truncated 2822 lines...]

kafka.log.FileMessageSetTest > testWrittenEqualsRead PASSED

kafka.log.FileMessageSetTest > testWriteTo PASSED

kafka.log.FileMessageSetTest > testPreallocateFalse PASSED

kafka.log.FileMessageSetTest > testPreallocateClearShutdown PASSED

kafka.log.FileMessageSetTest > testSearch PASSED

kafka.log.FileMessageSetTest > testSizeInBytes PASSED

kafka.log.OffsetMapTest > testClear PASSED

kafka.log.OffsetMapTest > testBasicValidation PASSED

kafka.log.OffsetIndexTest > lookupExtremeCases PASSED

kafka.log.OffsetIndexTest > appendTooMany PASSED

kafka.log.OffsetIndexTest > randomLookupTest PASSED

kafka.log.OffsetIndexTest > testReopen PASSED

kafka.log.OffsetIndexTest > appendOutOfOrder PASSED

kafka.log.OffsetIndexTest > truncate PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[0] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[1] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[2] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[3] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[4] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[5] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[6] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[7] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[8] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[9] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[10] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[11] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[12] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[13] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[14] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[15] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[16] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[17] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[18] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[19] PASSED

kafka.log.LogManagerTest > testCleanupSegmentsToMaintainSize PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithRelativeDirectory 
PASSED

kafka.log.LogManagerTest > testGetNonExistentLog PASSED

kafka.log.LogManagerTest > testTwoLogManagersUsingSameDirFails PASSED

kafka.log.LogManagerTest > testLeastLoadedAssignment PASSED

kafka.log.LogManagerTest > testCleanupExpiredSegments PASSED

kafka.log.LogManagerTest > testCheckpointRecoveryPoints PASSED

kafka.log.LogManagerTest > testTimeBasedFlush PASSED

kafka.log.LogManagerTest > testCreateLog PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithTrailingSlash PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[0] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[1] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[2] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[3] PASSED

kafka.security.auth.ZkAuthorizationTest > testIsZkSecurityEnabled PASSED

kafka.security.auth.ZkAuthorizationTest > testZkUtils PASSED

kafka.security.auth.ZkAuthorizationTest > testZkAntiMigration PASSED

kafka.security.auth.ZkAuthorizationTest > testZkMigration PASSED

kafka.security.auth.ZkAuthorizationTest > testChroot PASSED

kafka.security.auth.ZkAuthorizationTest > testDelete PASSED

kafka.security.auth.ZkAuthorizationTest > testDeleteRecursive PASSED

kafka.security.auth.AclTest > testAclJsonConversion PASSED

kafka.security.auth.ResourceTypeTest > testFromString PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAllowAllAccess PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFound PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAclManagementAPIs PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testWildCardAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testTopicAcl PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testSuperUserHasAccess PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testDenyTakesPrecedence PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFoundOverride PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testLoadCache PASSED

kafka.security.auth.PermissionTypeTest > testFromString PASSED

kafka.security.auth.OperationTest > testFromString PASSED

kafka.admin.AdminTest > testBasicPreferredReplicaElection PASSED

kafka.admin.AdminTest > testPreferredReplicaJsonData PASSED

kafka.admin.AdminTest > testReassigningNonExistingPartition PASSED

kafka.admin.AdminTest > testBootstrapClientIdConfig PASSED


Jenkins build is back to normal : kafka_0.9.0_jdk7 #120

2016-02-09 Thread Apache Jenkins Server
See 



[GitHub] kafka pull request: HOTFIX: fix NPE after standby task reassignmen...

2016-02-09 Thread ymatsuda
GitHub user ymatsuda opened a pull request:

https://github.com/apache/kafka/pull/889

HOTFIX: fix NPE after standby task reassignment

Buffered records of change logs must be cleared upon reassignment of 
standby tasks. 
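
A rough Scala sketch of the idea, with hypothetical names throughout (the 
real change lives in the Streams standby-task code):

{code}
import scala.collection.mutable
import org.apache.kafka.common.TopicPartition

// Hypothetical sketch: buffered changelog records are keyed by partition and
// must be dropped when standby tasks are reassigned; otherwise a later lookup
// against a stale buffer can fail with an NPE.
val bufferedChangelogs = mutable.Map.empty[TopicPartition, mutable.Buffer[Array[Byte]]]

def onStandbyTasksReassigned(revoked: Iterable[TopicPartition]): Unit =
  revoked.foreach(bufferedChangelogs.remove) // clear buffers for revoked partitions
{code}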

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ymatsuda/kafka hotfix

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/889.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #889


commit ab86801a478cf0c971e90a9d6fa71f25786a4c4e
Author: Yasuhiro Matsuda 
Date:   2016-02-09T17:34:27Z

HOTFIX: fix NPE after standby task reassignment




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3217) Unit tests which dont close producers auto-create topics in Kafka brokers of other tests when port is reused

2016-02-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15139288#comment-15139288
 ] 

ASF GitHub Bot commented on KAFKA-3217:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/882


> Unit tests which dont close producers auto-create topics in Kafka brokers of 
> other tests when port is reused
> 
>
> Key: KAFKA-3217
> URL: https://issues.apache.org/jira/browse/KAFKA-3217
> Project: Kafka
>  Issue Type: Bug
>  Components: unit tests
>Affects Versions: 0.9.0.0
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
> Fix For: 0.9.1.0
>
>
> Consumer tests occasionally fail with the exception:
> {quote}
> kafka.common.TopicExistsException: Topic "topic" already exists.
> at 
> kafka.admin.AdminUtils$.createOrUpdateTopicPartitionAssignmentPathInZK(AdminUtils.scala:261)
> at kafka.admin.AdminUtils$.createTopic(AdminUtils.scala:245)
> at kafka.utils.TestUtils$.createTopic(TestUtils.scala:237)
> at kafka.api.BaseConsumerTest.setUp(BaseConsumerTest.scala:65)
> {quote}
> Recreating this failure with some additional logging shows that it happens 
> because a few tests that create a topic named "topic" close their Kafka 
> server but not the producer. When the ephemeral port used by the closed 
> Kafka server is reused by another Kafka server in a subsequent test, the 
> producer retries left over from the previous test cause "topic" to be 
> recreated via auto-create in the new Kafka server. This occasionally results 
> in an error in the consumer tests when the topic is auto-created before the 
> test attempts to create it.
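
The corresponding fix pattern is simply to make sure the producer is closed 
with the test, so its retries cannot outlive it; a minimal Scala sketch (the 
helper is illustrative, not the actual test-utility change):

{code}
import org.apache.kafka.clients.producer.KafkaProducer

// Illustrative teardown discipline: close the producer before (or with) the
// broker so buffered retries cannot hit a later test's broker on a reused port.
def withProducer[K, V](producer: KafkaProducer[K, V])(body: KafkaProducer[K, V] => Unit): Unit =
  try body(producer)
  finally producer.close() // flushes pending sends and stops further retries
{code}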



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3217) Unit tests which dont close producers auto-create topics in Kafka brokers of other tests when port is reused

2016-02-09 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava updated KAFKA-3217:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Issue resolved by pull request 882
[https://github.com/apache/kafka/pull/882]

> Unit tests which dont close producers auto-create topics in Kafka brokers of 
> other tests when port is reused
> 
>
> Key: KAFKA-3217
> URL: https://issues.apache.org/jira/browse/KAFKA-3217
> Project: Kafka
>  Issue Type: Bug
>  Components: unit tests
>Affects Versions: 0.9.0.0
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
> Fix For: 0.9.1.0
>
>
> Consumer tests occasionally fail with the exception:
> {quote}
> kafka.common.TopicExistsException: Topic "topic" already exists.
> at 
> kafka.admin.AdminUtils$.createOrUpdateTopicPartitionAssignmentPathInZK(AdminUtils.scala:261)
> at kafka.admin.AdminUtils$.createTopic(AdminUtils.scala:245)
> at kafka.utils.TestUtils$.createTopic(TestUtils.scala:237)
> at kafka.api.BaseConsumerTest.setUp(BaseConsumerTest.scala:65)
> {quote}
> Recreating this failure with some additional logging shows that it happens 
> because a few tests that create a topic named "topic" close their Kafka 
> server but not the producer. When the ephemeral port used by the closed 
> Kafka server is reused by another Kafka server in a subsequent test, the 
> producer retries left over from the previous test cause "topic" to be 
> recreated via auto-create in the new Kafka server. This occasionally results 
> in an error in the consumer tests when the topic is auto-created before the 
> test attempts to create it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-3217: Close producers in unit tests

2016-02-09 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/882


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Build failed in Jenkins: kafka-trunk-jdk7 #1029

2016-02-09 Thread Apache Jenkins Server
See 

Changes:

[me] MINOR: Use explicit type in AclCommand

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H11 (docker Ubuntu ubuntu yahoo-not-h2) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision 9cac38c0216879776b9eab728235e35118e9026e 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 9cac38c0216879776b9eab728235e35118e9026e
 > git rev-list 0eaede4dc95846e2b8f7452f41c58c0122e7a563 # timeout=10
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/hudson3122130646693301913.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.4-rc-2/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:downloadWrapper

BUILD SUCCESSFUL

Total time: 54.383 secs
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/hudson6338017528747184179.sh
+ export GRADLE_OPTS=-Xmx1024m
+ GRADLE_OPTS=-Xmx1024m
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 clean jarAll testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.10/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:clean UP-TO-DATE
:clients:clean UP-TO-DATE
:connect:clean UP-TO-DATE
:core:clean UP-TO-DATE
:examples:clean UP-TO-DATE
:log4j-appender:clean UP-TO-DATE
:streams:clean UP-TO-DATE
:tools:clean UP-TO-DATE
:connect:api:clean UP-TO-DATE
:connect:file:clean UP-TO-DATE
:connect:json:clean UP-TO-DATE
:connect:runtime:clean UP-TO-DATE
:streams:examples:clean UP-TO-DATE
:jar_core_2_10
Building project 'core' with Scala version 2.10.6
:kafka-trunk-jdk7:clients:compileJava
:jar_core_2_10 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Failed to capture snapshot of input files for task 'compileJava' during 
up-to-date check.  See stacktrace for details.
> Could not add entry 
> '/home/jenkins/.gradle/caches/modules-2/files-2.1/net.jpountz.lz4/lz4/1.3.0/c708bb2590c0652a642236ef45d9f99ff842a2ce/lz4-1.3.0.jar'
>  to cache fileHashes.bin 
> (

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED

Total time: 58.445 secs
Build step 'Execute shell' marked build as failure
Recording test results
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
ERROR: Publisher 'Publish JUnit test result report' failed: No test report 
files were found. Configuration error?
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51


[GitHub] kafka pull request: HOTFIX: fix NPE after standby task reassignmen...

2016-02-09 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/889


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request: MINOR: Stabilize transient replication test fa...

2016-02-09 Thread granders
GitHub user granders opened a pull request:

https://github.com/apache/kafka/pull/890

MINOR: Stabilize transient replication test failures in 0.9.0

- ported timeout values in `produce_consume_validate.py` from trunk to 0.9.0
- ported `producer_throughput_value` in `replication_test.py` from trunk to 
0.9.0
- fixed the `min.insync.replicas` config, which, due to an error, was not 
being applied to its intended topics (see the sketch below)
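
On that last point, min.insync.replicas only has effect where it is actually 
applied; a hedged Scala sketch of setting it as a topic-level config 
(illustrative value; the system tests themselves configure this from Python):

{code}
import java.util.Properties

// Topic-level override: with producer acks=all, writes fail unless at least
// this many replicas are in sync. Applying the config to the wrong topics
// silently weakens the durability guarantee the test expects.
val topicConfig = new Properties()
topicConfig.put("min.insync.replicas", "2")
{code}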

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/confluentinc/kafka 
stabilize-transient-failures-0.9.0

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/890.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #890


commit 5ce08dbfacdea41b8eb7844004407dd0c5776625
Author: Geoff Anderson 
Date:   2016-02-08T19:47:06Z

Update timeout on consumption check to match trunk

commit 99c62e46fef6771909e7fdf6d1c839fb501e3996
Author: Geoff Anderson 
Date:   2016-02-08T22:07:59Z

Match timeout in trunk in producer check

commit b65b017b809363401077ab10b949f1fd6f68698a
Author: Geoff Anderson 
Date:   2016-02-09T02:55:20Z

Fixed invalid configuration of min.insync.replicas




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---