[jira] [Created] (KAFKA-2700) delete topic should remove the corresponding ACL and configs

2015-10-27 Thread Jun Rao (JIRA)
Jun Rao created KAFKA-2700:
--

 Summary: delete topic should remove the corresponding ACL and 
configs
 Key: KAFKA-2700
 URL: https://issues.apache.org/jira/browse/KAFKA-2700
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 0.9.0.0
Reporter: Jun Rao


After a topic is successfully deleted, we should also remove any ACLs, configs 
and perhaps committed offsets associated with the topic.
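A minimal sketch of the cleanup this issue asks for, against a hypothetical flat metadata store (the paths and the Map-based store below are illustrative stand-ins, not Kafka's actual ZooKeeper schema): deleting a topic should also purge its config, ACL, and committed-offset entries.

```java
import java.util.HashMap;
import java.util.Map;

public class TopicMetadataCleanup {
    // Stand-in metadata store: path -> serialized value (hypothetical layout).
    static Map<String, String> store = new HashMap<>();

    static void deleteTopic(String topic) {
        // Remove the topic entry plus the associated config, ACL and
        // committed-offset entries, mirroring the issue's proposal.
        store.keySet().removeIf(path ->
            path.equals("/brokers/topics/" + topic)
                || path.equals("/config/topics/" + topic)
                || path.equals("/kafka-acl/Topic/" + topic)
                || path.startsWith("/offsets/" + topic + "/"));
    }

    public static void main(String[] args) {
        store.put("/brokers/topics/test", "{}");
        store.put("/config/topics/test", "retention.ms=100");
        store.put("/kafka-acl/Topic/test", "User:alice,Read");
        store.put("/offsets/test/0", "42");
        store.put("/config/topics/other", "x");
        deleteTopic("test");
        // Only the unrelated topic's entry survives.
        System.out.println(store.size());
    }
}
```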



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-2699) Add test to validate times in RequestMetrics

2015-10-27 Thread Aditya Auradkar (JIRA)
Aditya Auradkar created KAFKA-2699:
--

 Summary: Add test to validate times in RequestMetrics
 Key: KAFKA-2699
 URL: https://issues.apache.org/jira/browse/KAFKA-2699
 Project: Kafka
  Issue Type: Test
Reporter: Aditya Auradkar
Assignee: Aditya Auradkar


No tests exist to validate the reported times in RequestMetrics. 
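One property such a test could validate, sketched below with illustrative field and method names (the real RequestMetrics in kafka.network is not reproduced here): the component times derived from a request's lifecycle timestamps should sum to the reported total time.

```java
public class RequestTimeCheck {
    // Component times derived from the lifecycle timestamps (hypothetical names).
    static long requestQueueTimeMs, localTimeMs, remoteTimeMs,
                responseQueueTimeMs, responseSendTimeMs;

    static long totalTimeMs(long received, long dequeued, long completedLocal,
                            long completedRemote, long responseDequeued,
                            long responseSent) {
        requestQueueTimeMs  = dequeued - received;
        localTimeMs         = completedLocal - dequeued;
        remoteTimeMs        = completedRemote - completedLocal;
        responseQueueTimeMs = responseDequeued - completedRemote;
        responseSendTimeMs  = responseSent - responseDequeued;
        return responseSent - received;
    }

    public static void main(String[] args) {
        long total = totalTimeMs(100, 103, 110, 110, 115, 118);
        long sum = requestQueueTimeMs + localTimeMs + remoteTimeMs
                 + responseQueueTimeMs + responseSendTimeMs;
        // The invariant a test would assert: total equals the component sum.
        System.out.println(total == sum);
    }
}
```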





Build failed in Jenkins: kafka-trunk-jdk7 #727

2015-10-27 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] HOTFIX: Rename WakeupException in MirrorMaker

--
[...truncated 1702 lines...]

kafka.coordinator.MemberMetadataTest > testVoteRaisesOnNoSupportedProtocols 
PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[0] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[1] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[2] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[3] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[4] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[5] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[6] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[7] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[8] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[9] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[10] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[11] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[12] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[13] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[14] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[15] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[16] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[17] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[18] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[19] PASSED

kafka.log.OffsetMapTest > testClear PASSED

kafka.log.OffsetMapTest > testBasicValidation PASSED

kafka.log.LogManagerTest > testCleanupSegmentsToMaintainSize PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithRelativeDirectory 
PASSED

kafka.log.LogManagerTest > testGetNonExistentLog PASSED

kafka.log.LogManagerTest > testTwoLogManagersUsingSameDirFails PASSED

kafka.log.LogManagerTest > testLeastLoadedAssignment PASSED

kafka.log.LogManagerTest > testCleanupExpiredSegments PASSED

kafka.log.LogManagerTest > testCheckpointRecoveryPoints PASSED

kafka.log.LogManagerTest > testTimeBasedFlush PASSED

kafka.log.LogManagerTest > testCreateLog PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithTrailingSlash PASSED

kafka.log.LogSegmentTest > testRecoveryWithCorruptMessage PASSED

kafka.log.LogSegmentTest > testRecoveryFixesCorruptIndex PASSED

kafka.log.LogSegmentTest > testReadFromGap PASSED

kafka.log.LogSegmentTest > testTruncate PASSED

kafka.log.LogSegmentTest > testReadBeforeFirstOffset PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeAppendMessage PASSED

kafka.log.LogSegmentTest > testChangeFileSuffixes PASSED

kafka.log.LogSegmentTest > testMaxOffset PASSED

kafka.log.LogSegmentTest > testNextOffsetCalculation PASSED

kafka.log.LogSegmentTest > testReadOnEmptySegment PASSED

kafka.log.LogSegmentTest > testReadAfterLast PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeClearShutdown PASSED

kafka.log.LogSegmentTest > testTruncateFull PASSED

kafka.log.FileMessageSetTest > testTruncate PASSED

kafka.log.FileMessageSetTest > testIterationOverPartialAndTruncation PASSED

kafka.log.FileMessageSetTest > testRead PASSED

kafka.log.FileMessageSetTest > testFileSize PASSED

kafka.log.FileMessageSetTest > testIteratorWithLimits PASSED

kafka.log.FileMessageSetTest > testPreallocateTrue PASSED

kafka.log.FileMessageSetTest > testIteratorIsConsistent PASSED

kafka.log.FileMessageSetTest > testIterationDoesntChangePosition PASSED

kafka.log.FileMessageSetTest > testWrittenEqualsRead PASSED

kafka.log.FileMessageSetTest > testWriteTo PASSED

kafka.log.FileMessageSetTest > testPreallocateFalse PASSED

kafka.log.FileMessageSetTest > testPreallocateClearShutdown PASSED

kafka.log.FileMessageSetTest > testSearch PASSED

kafka.log.FileMessageSetTest > testSizeInBytes PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[0] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[1] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[2] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[3] PASSED

kafka.log.LogTest > testParseTopicPartitionNameForMissingTopic PASSED

kafka.log.LogTest > testIndexRebuild PASSED

kafka.log.LogTest > testLogRolls PASSED

kafka.log.LogTest > testMessageSizeCheck PASSED

kafka.log.LogTest > testAsyncDelete PASSED

kafka.log.LogTest > testReadOutOfRange PASSED

kafka.log.LogTest > testReadAtLogGap PASSED

kafka.log.LogTest > testTimeBasedLogRoll PASSED

kafka.log.LogTest > testLoadEmptyLog PASSED

kafka.log.LogTest > testMessageSetSizeCheck PASSED

kafka.log.LogTest > testIndexResizingAtTruncation PASSED

kafka.log.LogTest > testCompactedTopicConstraints PASSED

kafka.log.LogTest > testThatGarbageCollectingSegmentsDoesntChangeOffset PASSED

Build failed in Jenkins: kafka-trunk-jdk8 #65

2015-10-27 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] HOTFIX: Rename WakeupException in MirrorMaker

--
[...truncated 364 lines...]
:kafka-trunk-jdk8:log4j-appender:classes UP-TO-DATE
:kafka-trunk-jdk8:log4j-appender:jar UP-TO-DATE
:kafka-trunk-jdk8:core:compileJava UP-TO-DATE
:kafka-trunk-jdk8:core:compileScala UP-TO-DATE
:kafka-trunk-jdk8:core:processResources UP-TO-DATE
:kafka-trunk-jdk8:core:classes UP-TO-DATE
:kafka-trunk-jdk8:log4j-appender:javadoc
cache fileHashes.bin 
(
 is corrupt. Discarding.
:kafka-trunk-jdk8:core:javadoc
:kafka-trunk-jdk8:core:javadocJar
:kafka-trunk-jdk8:core:scaladoc
[ant:scaladoc] Element 
' 
does not exist.
[ant:scaladoc] 
:293:
 warning: a pure expression does nothing in statement position; you may be 
omitting necessary parentheses
[ant:scaladoc] ControllerStats.uncleanLeaderElectionRate
[ant:scaladoc] ^
[ant:scaladoc] 
:294:
 warning: a pure expression does nothing in statement position; you may be 
omitting necessary parentheses
[ant:scaladoc] ControllerStats.leaderElectionTimer
[ant:scaladoc] ^
[ant:scaladoc] warning: there were 15 feature warning(s); re-run with -feature 
for details
[ant:scaladoc] 
:72:
 warning: Could not find any member to link for 
"java.util.concurrent.BlockingQueue#offer".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 
:32:
 warning: Could not find any member to link for 
"java.util.concurrent.BlockingQueue#offer".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 
:137:
 warning: Could not find any member to link for 
"java.util.concurrent.BlockingQueue#poll".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 
:120:
 warning: Could not find any member to link for 
"java.util.concurrent.BlockingQueue#poll".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 
:97:
 warning: Could not find any member to link for 
"java.util.concurrent.BlockingQueue#put".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 
:152:
 warning: Could not find any member to link for 
"java.util.concurrent.BlockingQueue#take".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 9 warnings found
:kafka-trunk-jdk8:core:scaladocJar
:kafka-trunk-jdk8:core:docsJar
:docsJar_2_11_7
Building project 'core' with Scala version 2.11.7
:kafka-trunk-jdk8:clients:compileJava UP-TO-DATE
:kafka-trunk-jdk8:clients:processResources UP-TO-DATE
:kafka-trunk-jdk8:clients:classes UP-TO-DATE
:kafka-trunk-jdk8:clients:determineCommitId UP-TO-DATE
:kafka-trunk-jdk8:clients:createVersionFile
:kafka-trunk-jdk8:clients:jar UP-TO-DATE
:kafka-trunk-jdk8:clients:javadoc UP-TO-DATE
:kafka-trunk-jdk8:log4j-appender:compileJava UP-TO-DATE
:kafka-trunk-jdk8:log4j-appender:processResources UP-TO-DATE
:kafka-trunk-jdk8:log4j-appender:classes UP-TO-DATE
:kafka-trunk-jdk8:log4j-appender:jar UP-TO-DATE
:kafka-trunk-jdk8:core:compileJava UP-TO-DATE
:kafka-trunk-jdk8:core:compileScala
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512m; 
support was removed in 8.0

:78:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.

org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP
 ^
:36:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
 commitTimestamp: Long = 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP,

  ^


[GitHub] kafka pull request: HOTFIX: Rename WakeupException in MirrorMaker

2015-10-27 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/375


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request: HOTFIX: Rename WakeupException in MirrorMaker

2015-10-27 Thread guozhangwang
GitHub user guozhangwang opened a pull request:

https://github.com/apache/kafka/pull/375

HOTFIX: Rename WakeupException in MirrorMaker



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/guozhangwang/kafka HFWakeup

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/375.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #375


commit f79971dda5b2ea038d39d1d15671cd0389e9f4d8
Author: Guozhang Wang 
Date:   2015-10-28T02:29:02Z

hotfix: WakeupException in MirrorMaker






Build failed in Jenkins: kafka-trunk-jdk7 #726

2015-10-27 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-2683: ensure wakeup exceptions raised to user

[wangguoz] MINOR: Expose ReplicaManager gauges

--
Started by an SCM change
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on jenkins-ubuntu-1404-4gb-819 (jenkins-cloud-4GB cloud-slave 
Ubuntu ubuntu) in workspace 
Cloning the remote Git repository
Cloning repository https://git-wip-us.apache.org/repos/asf/kafka.git
 > git init  # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # 
 > timeout=10
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision 7a36d36478635ae16b64c6410b88c92b45d5f129 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 7a36d36478635ae16b64c6410b88c92b45d5f129
 > git rev-list 13c3e049fbf22522c90c2a0b4b4f680b974d9bea # timeout=10
Unpacking http://services.gradle.org/distributions/gradle-2.1-bin.zip to 
/jenkins/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1 on 
jenkins-ubuntu-1404-4gb-819
Setting 
GRADLE_2_1_HOME=/jenkins/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
Setting JDK_1_7_LATEST__HOME=/home/jenkins/tools/java/latest1.7
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/hudson8977904777471738178.sh
+ /jenkins/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.1/userguide/gradle_daemon.html.
Download 
https://repo1.maven.org/maven2/org/ajoberstar/grgit/0.2.3/grgit-0.2.3.pom
Download 
https://repo1.maven.org/maven2/org/eclipse/jgit/org.eclipse.jgit/3.3.0.201403021825-r/org.eclipse.jgit-3.3.0.201403021825-r.pom
Download 
https://repo1.maven.org/maven2/org/eclipse/jgit/org.eclipse.jgit-parent/3.3.0.201403021825-r/org.eclipse.jgit-parent-3.3.0.201403021825-r.pom
Download 
https://repo1.maven.org/maven2/org/eclipse/jgit/org.eclipse.jgit.ui/3.3.0.201403021825-r/org.eclipse.jgit.ui-3.3.0.201403021825-r.pom
Download 
https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.jsch/0.0.7/jsch.agentproxy.jsch-0.0.7.pom
Download 
https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy/0.0.7/jsch.agentproxy-0.0.7.pom
Download 
https://repo1.maven.org/maven2/org/sonatype/oss/oss-parent/6/oss-parent-6.pom
Download 
https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.pageant/0.0.7/jsch.agentproxy.pageant-0.0.7.pom
Download 
https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.sshagent/0.0.7/jsch.agentproxy.sshagent-0.0.7.pom
Download 
https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.usocket-jna/0.0.7/jsch.agentproxy.usocket-jna-0.0.7.pom
Download 
https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.usocket-nc/0.0.7/jsch.agentproxy.usocket-nc-0.0.7.pom
Download 
https://repo1.maven.org/maven2/org/slf4j/slf4j-api/1.7.6/slf4j-api-1.7.6.pom
Download 
https://repo1.maven.org/maven2/org/slf4j/slf4j-parent/1.7.6/slf4j-parent-1.7.6.pom
Download https://repo1.maven.org/maven2/com/jcraft/jsch/0.1.46/jsch-0.1.46.pom
Download 
https://repo1.maven.org/maven2/com/googlecode/javaewah/JavaEWAH/0.7.9/JavaEWAH-0.7.9.pom
Download 
https://repo1.maven.org/maven2/org/apache/httpcomponents/httpclient/4.1.3/httpclient-4.1.3.pom
Download 
https://repo1.maven.org/maven2/org/apache/httpcomponents/httpcomponents-client/4.1.3/httpcomponents-client-4.1.3.pom
Download 
https://repo1.maven.org/maven2/org/apache/httpcomponents/project/5/project-5.pom
Download 
https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.core/0.0.7/jsch.agentproxy.core-0.0.7.pom
Download https://repo1.maven.org/maven2/net/java/dev/jna/jna/3.4.0/jna-3.4.0.pom
Download 
https://repo1.maven.org/maven2/net/java/dev/jna/platform/3.4.0/platform-3.4.0.pom
Download 
https://repo1.maven.org/maven2/org/apache/httpcomponents/httpcore/4.1.4/httpcore-4.1.4.pom
Download 
https://repo1.maven.org/maven2/org/apache/httpcomponents/httpcomponents-core/4.1.4/httpcomponents-core-4.1.4.pom
Download 
https://rep

Build failed in Jenkins: kafka-trunk-jdk8 #64

2015-10-27 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-2683: ensure wakeup exceptions raised to user

[wangguoz] MINOR: Expose ReplicaManager gauges

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H11 (Ubuntu ubuntu) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision 7a36d36478635ae16b64c6410b88c92b45d5f129 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 7a36d36478635ae16b64c6410b88c92b45d5f129
 > git rev-list 13c3e049fbf22522c90c2a0b4b4f680b974d9bea # timeout=10
Setting 
GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson2798835618491449265.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.1/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.5
:downloadWrapper UP-TO-DATE

BUILD SUCCESSFUL

Total time: 12.883 secs
Setting 
GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson1936841379695496482.sh
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 clean jarAll docsJarAll 
testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.8/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.5
:clean UP-TO-DATE
:clients:clean
:contrib:clean UP-TO-DATE
:copycat:clean UP-TO-DATE
:core:clean
:examples:clean
:log4j-appender:clean
:streams:clean
:tools:clean
:contrib:hadoop-consumer:clean
:contrib:hadoop-producer:clean
:copycat:api:clean
:copycat:file:clean
:copycat:json:clean
:copycat:runtime:clean
:jar_core_2_10_5
Building project 'core' with Scala version 2.10.5
:kafka-trunk-jdk8:clients:compileJava
warning: [options] bootstrap class path not set in conjunction with -source 1.7
Note: 

 uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
Note: Some input files use unchecked or unsafe operations.
Note: Recompile with -Xlint:unchecked for details.
1 warning

:kafka-trunk-jdk8:clients:processResources UP-TO-DATE
:kafka-trunk-jdk8:clients:classes
:kafka-trunk-jdk8:clients:determineCommitId UP-TO-DATE
:kafka-trunk-jdk8:clients:createVersionFile
:kafka-trunk-jdk8:clients:jar
:kafka-trunk-jdk8:log4j-appender:compileJava
warning: [options] bootstrap class path not set in conjunction with -source 1.7
1 warning

:kafka-trunk-jdk8:log4j-appender:processResources UP-TO-DATE
:kafka-trunk-jdk8:log4j-appender:classes
:kafka-trunk-jdk8:log4j-appender:jar
:kafka-trunk-jdk8:core:compileJava UP-TO-DATE
:kafka-trunk-jdk8:core:compileScala
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512m; 
support was removed in 8.0

:78:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.

org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP
 ^
:36:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
 commitTimestamp: Long = 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP,

  ^
:37:
 value DEFAULT_TIME

[jira] [Commented] (KAFKA-1641) Log cleaner exits if last cleaned offset is lower than earliest offset

2015-10-27 Thread Denis Zhdanov (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1641?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14977512#comment-14977512
 ] 

Denis Zhdanov commented on KAFKA-1641:
--

We are hitting this bug again and again. Is it possible to apply the fix to 
the current 0.8.x release?
I can create a PR on GitHub, for example, if that is allowed.

> Log cleaner exits if last cleaned offset is lower than earliest offset
> --
>
> Key: KAFKA-1641
> URL: https://issues.apache.org/jira/browse/KAFKA-1641
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.1.1
>Reporter: Joel Koshy
>Assignee: Guozhang Wang
> Fix For: 0.9.0.0
>
> Attachments: KAFKA-1641.patch, KAFKA-1641_2014-10-09_13:04:15.patch
>
>
> Encountered this recently: the log cleaner exited a while ago (I think 
> because the topic had compressed messages). That issue was subsequently 
> addressed by having the producer send only uncompressed messages. However, 
> on a subsequent restart of the broker we see the error below.
> In this scenario I think it is reasonable to just emit a warning and have the 
> cleaner round its first dirty offset up to the base offset of the first 
> segment.
> {code}
> [kafka-server] [] [kafka-log-cleaner-thread-0], Error due to 
> java.lang.IllegalArgumentException: requirement failed: Last clean offset is 
> 54770438 but segment base offset is 382844024 for log testtopic-0.
> at scala.Predef$.require(Predef.scala:145)
> at kafka.log.Cleaner.buildOffsetMap(LogCleaner.scala:491)
> at kafka.log.Cleaner.clean(LogCleaner.scala:288)
> at 
> kafka.log.LogCleaner$CleanerThread.cleanOrSleep(LogCleaner.scala:202)
> at kafka.log.LogCleaner$CleanerThread.doWork(LogCleaner.scala:187)
> at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:51)
> {code}
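The fix proposed in the report can be sketched as a small clamp (method and variable names below are hypothetical, not Kafka's actual cleaner code): when the checkpointed last-clean offset falls below the first segment's base offset, log a warning and round the first dirty offset up instead of failing the requirement check.

```java
public class DirtyOffsetClamp {
    static long firstDirtyOffset(long checkpointedOffset, long firstSegmentBaseOffset) {
        if (checkpointedOffset < firstSegmentBaseOffset) {
            // Instead of throwing IllegalArgumentException as in the stack
            // trace above, warn and resume cleaning from the earliest segment.
            System.err.println("WARN: resetting first dirty offset from "
                + checkpointedOffset + " to " + firstSegmentBaseOffset);
            return firstSegmentBaseOffset;
        }
        return checkpointedOffset;
    }

    public static void main(String[] args) {
        // Offsets taken from the stack trace in the report.
        System.out.println(firstDirtyOffset(54770438L, 382844024L));
        System.out.println(firstDirtyOffset(400000000L, 382844024L));
    }
}
```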





[GitHub] kafka pull request: MINOR: Expose ReplicaManager gauges

2015-10-27 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/364




Build failed in Jenkins: kafka-trunk-jdk8 #63

2015-10-27 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] HOTFIX: group rebalance can throw illegal generation or rebalance in

[wangguoz] HOTFIX: fix off-by-one stream offset commit

[wangguoz] HOTFIX: call consumer.poll() even when no task is assigned

[wangguoz] KAFKA-1888: rolling upgrade test

[wangguoz] KAFKA-2677: ensure consumer sees coordinator disconnects

[wangguoz] HOTFIX: correct sourceNodes for kstream.through()

--
[...truncated 4673 lines...]

org.apache.kafka.copycat.json.JsonConverterTest > nullSchemaAndMapToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > stringToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > timestampToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > 
testCopycatSchemaMetadataTranslation PASSED

org.apache.kafka.copycat.json.JsonConverterTest > timestampToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > decimalToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > mapToCopycatStringKeys PASSED

org.apache.kafka.copycat.json.JsonConverterTest > mapToJsonNonStringKeys PASSED

org.apache.kafka.copycat.json.JsonConverterTest > longToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > mismatchSchemaJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > 
testCacheSchemaToCopycatConversion PASSED

org.apache.kafka.copycat.json.JsonConverterTest > 
testJsonSchemaMetadataTranslation PASSED

org.apache.kafka.copycat.json.JsonConverterTest > bytesToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > shortToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > intToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > structToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > stringToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > nullSchemaAndArrayToJson 
PASSED

org.apache.kafka.copycat.json.JsonConverterTest > byteToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > nullSchemaPrimitiveToCopycat 
PASSED

org.apache.kafka.copycat.json.JsonConverterTest > byteToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > intToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > dateToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > noSchemaToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > noSchemaToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > nullSchemaAndPrimitiveToJson 
PASSED

org.apache.kafka.copycat.json.JsonConverterTest > mapToJsonStringKeys PASSED

org.apache.kafka.copycat.json.JsonConverterTest > arrayToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > nullToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > timeToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > structToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > shortToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > dateToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > doubleToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > timeToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > floatToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > decimalToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > arrayToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > booleanToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > mapToCopycatNonStringKeys 
PASSED

org.apache.kafka.copycat.json.JsonConverterTest > bytesToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > doubleToCopycat PASSED
:copycat:runtime:checkstyleMain
:copycat:runtime:compileTestJava
warning: [options] bootstrap class path not set in conjunction with -source 1.7
Note: Some input files use unchecked or unsafe operations.
Note: Recompile with -Xlint:unchecked for details.
1 warning

:copycat:runtime:processTestResources
:copycat:runtime:testClasses
:copycat:runtime:checkstyleTest
:copycat:runtime:test

org.apache.kafka.copycat.runtime.WorkerSinkTaskTest > testDeliverConvertsData 
PASSED

org.apache.kafka.copycat.runtime.WorkerSinkTaskTest > testCommit PASSED

org.apache.kafka.copycat.runtime.WorkerSinkTaskTest > 
testCommitTaskFlushFailure PASSED

org.apache.kafka.copycat.runtime.WorkerSinkTaskTest > testCommitConsumerFailure 
PASSED

org.apache.kafka.copycat.runtime.WorkerSinkTaskTest > testCommitTimeout PASSED

org.apache.kafka.copycat.runtime.WorkerSinkTaskTest > testAssignmentPauseResume 
PASSED

org.apache.kafka.copycat.runtime.WorkerSinkTaskTest > testPollsInBackground 
PASSED

org.apache.kafka.copycat.runtime.WorkerSourceTaskTest > testCommit PASSED

org.apache.kafka.copycat.runtime.WorkerSourceTaskTest > testPollsInBackground 
PASSED

org.apache.kafka.copycat.runtime.WorkerSourceTaskTest > testCommitFailure PASSED

org.apache.kafka.copycat.runtime.WorkerSourceTaskTest > 
t

[jira] [Commented] (KAFKA-2683) Ensure wakeup exceptions are propagated to user in new consumer

2015-10-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14977465#comment-14977465
 ] 

ASF GitHub Bot commented on KAFKA-2683:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/366


> Ensure wakeup exceptions are propagated to user in new consumer
> ---
>
> Key: KAFKA-2683
> URL: https://issues.apache.org/jira/browse/KAFKA-2683
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
> Fix For: 0.9.0.0
>
>
> KafkaConsumer.wakeup() can be used to interrupt blocking operations (e.g. in 
> order to shut down), so wakeup exceptions must be propagated to the user. 
> Currently, there are several places in the code where a wakeup exception 
> could be caught and silently discarded. For example, when the rebalance 
> callback is invoked, we just catch and log all exceptions. In this case, we 
> also need to be careful that wakeup exceptions do not affect rebalance 
> callback semantics. In particular, it is currently possible for a wakeup to 
> cause onPartitionsRevoked to be invoked multiple times.
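The shutdown pattern at stake can be illustrated with a self-contained stand-in consumer (the FakeConsumer below is hypothetical; the real KafkaConsumer checks its wakeup flag inside blocking network calls): wakeup() called from another thread must surface as an exception in the poll loop rather than being swallowed internally, or the loop never exits.

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class WakeupSketch {
    static class WakeupException extends RuntimeException {}

    static class FakeConsumer {
        final AtomicBoolean wakeupRequested = new AtomicBoolean(false);
        void wakeup() { wakeupRequested.set(true); }
        void poll() {
            // The wakeup flag must NOT be caught and discarded internally;
            // it has to propagate out of poll() to the caller.
            if (wakeupRequested.getAndSet(false)) throw new WakeupException();
        }
    }

    static int runLoop(FakeConsumer consumer, int pollsBeforeWakeup) {
        int polls = 0;
        try {
            while (true) {
                // Simulate another thread requesting shutdown via wakeup().
                if (polls == pollsBeforeWakeup) consumer.wakeup();
                consumer.poll();
                polls++;
            }
        } catch (WakeupException e) {
            // Expected path on shutdown: fall through and close resources.
        }
        return polls;
    }

    public static void main(String[] args) {
        System.out.println(runLoop(new FakeConsumer(), 3));
    }
}
```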





[jira] [Resolved] (KAFKA-2683) Ensure wakeup exceptions are propagated to user in new consumer

2015-10-27 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-2683.
--
   Resolution: Fixed
Fix Version/s: 0.9.0.0

Issue resolved by pull request 366
[https://github.com/apache/kafka/pull/366]

> Ensure wakeup exceptions are propagated to user in new consumer
> ---
>
> Key: KAFKA-2683
> URL: https://issues.apache.org/jira/browse/KAFKA-2683
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
> Fix For: 0.9.0.0
>
>
> KafkaConsumer.wakeup() can be used to interrupt blocking operations (e.g. in 
> order to shut down), so wakeup exceptions must be propagated to the user. 
> Currently, there are several places in the code where a wakeup exception 
> could be caught and silently discarded. For example, when the rebalance 
> callback is invoked, we just catch and log all exceptions. In this case, we 
> also need to be careful that wakeup exceptions do not affect rebalance 
> callback semantics. In particular, it is currently possible for a wakeup to 
> cause onPartitionsRevoked to be invoked multiple times.





[GitHub] kafka pull request: KAFKA-2683: ensure wakeup exceptions raised to...

2015-10-27 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/366




[jira] [Commented] (KAFKA-2674) ConsumerRebalanceListener.onPartitionsRevoked() is not called on consumer close

2015-10-27 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14977457#comment-14977457
 ] 

Jiangjie Qin commented on KAFKA-2674:
-

I don't have a strong opinion on this. I agree with [~guozhang] that the 
argument name is a little bit weird. If we change the function name, perhaps 
we can use currentAssignment instead of oldAssignment?

> ConsumerRebalanceListener.onPartitionsRevoked() is not called on consumer 
> close
> ---
>
> Key: KAFKA-2674
> URL: https://issues.apache.org/jira/browse/KAFKA-2674
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.9.0.0
>Reporter: Michal Turek
>Assignee: Jason Gustafson
>
> Hi, I'm investigating and testing the behavior of the new consumer from the 
> planned release 0.9 and found an inconsistency in how rebalance callbacks are 
> called.
> I noticed that ConsumerRebalanceListener.onPartitionsRevoked() is NOT called 
> during consumer close and application shutdown. Its JavaDoc contract says:
> - "This method will be called before a rebalance operation starts and after 
> the consumer stops fetching data."
> - "It is recommended that offsets should be committed in this callback to 
> either Kafka or a custom offset store to prevent duplicate data."
> I believe calling consumer.close() is the start of a rebalance operation, and 
> even the local consumer that is closing should be notified so it can run any 
> rebalance logic, including committing offsets (e.g. if auto-commit is 
> disabled).
> There are commented logs of current and expected behaviors.
> {noformat}
> // Application start
> 2015-10-20 15:14:02.208 INFO  o.a.k.common.utils.AppInfoParser
> [TestConsumer-worker-0]: Kafka version : 0.9.0.0-SNAPSHOT 
> (AppInfoParser.java:82)
> 2015-10-20 15:14:02.208 INFO  o.a.k.common.utils.AppInfoParser
> [TestConsumer-worker-0]: Kafka commitId : 241b9ab58dcbde0c 
> (AppInfoParser.java:83)
> // Consumer started (the first one in group), rebalance callbacks are called 
> including empty onPartitionsRevoked()
> 2015-10-20 15:14:02.333 INFO  c.a.e.kafka.newapi.TestConsumer 
> [TestConsumer-worker-0]: Rebalance callback, revoked: [] 
> (TestConsumer.java:95)
> 2015-10-20 15:14:02.343 INFO  c.a.e.kafka.newapi.TestConsumer 
> [TestConsumer-worker-0]: Rebalance callback, assigned: [testB-1, testA-0, 
> testB-0, testB-3, testA-2, testB-2, testA-1, testA-4, testB-4, testA-3] 
> (TestConsumer.java:100)
> // Another consumer joined the group, rebalancing
> 2015-10-20 15:14:17.345 INFO  o.a.k.c.c.internals.Coordinator 
> [TestConsumer-worker-0]: Attempt to heart beat failed since the group is 
> rebalancing, try to re-join group. (Coordinator.java:714)
> 2015-10-20 15:14:17.346 INFO  c.a.e.kafka.newapi.TestConsumer 
> [TestConsumer-worker-0]: Rebalance callback, revoked: [testB-1, testA-0, 
> testB-0, testB-3, testA-2, testB-2, testA-1, testA-4, testB-4, testA-3] 
> (TestConsumer.java:95)
> 2015-10-20 15:14:17.349 INFO  c.a.e.kafka.newapi.TestConsumer 
> [TestConsumer-worker-0]: Rebalance callback, assigned: [testB-3, testA-4, 
> testB-4, testA-3] (TestConsumer.java:100)
> // Consumer started closing, there SHOULD be onPartitionsRevoked() callback 
> to commit offsets like during standard rebalance, but it is missing
> 2015-10-20 15:14:39.280 INFO  c.a.e.kafka.newapi.TestConsumer [main]: 
> Closing instance (TestConsumer.java:42)
> 2015-10-20 15:14:40.264 INFO  c.a.e.kafka.newapi.TestConsumer 
> [TestConsumer-worker-0]: Worker thread stopped (TestConsumer.java:89)
> {noformat}
> A workaround is to call onPartitionsRevoked() explicitly and manually just 
> before calling consumer.close(), but it seems dirty and error-prone to me. It 
> can simply be forgotten by someone without such experience.
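The workaround described in this report can be sketched as a toy model (illustrative Python with hypothetical class names; the real code would use the Java consumer and a ConsumerRebalanceListener):

```python
class FakeConsumer:
    """Minimal stand-in for a consumer (illustrative only)."""
    def __init__(self, assignment):
        self._assignment = assignment
        self.closed = False
    def assignment(self):
        return list(self._assignment)
    def close(self):
        self.closed = True

class Listener:
    """Stand-in rebalance listener that records what was revoked."""
    def __init__(self):
        self.revoked = None
    def on_partitions_revoked(self, partitions):
        self.revoked = list(partitions)  # commit offsets here in real code

def close_with_revoke(consumer, listener):
    # The workaround: fire the revocation callback manually just before
    # close(), so offset-commit logic in the callback still runs.
    listener.on_partitions_revoked(consumer.assignment())
    consumer.close()
```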





Re: [DISCUSS] KIP-36 - Rack aware replica assignment

2015-10-27 Thread Allen Wang
During the discussion in the hangout, it was mentioned that it would be
desirable that consumers know the rack information of the brokers so that
they can consume from a broker in the same rack to reduce latency. As I
understand it, this will only be beneficial if a consumer can consume from any
broker in the ISR, which is not possible now.

I suggest we skip the change to TMR. Once the change is made to consumer to
be able to consume from any broker in ISR, the rack information can be
added to TMR.

Another thing I want to confirm is command line behavior. I think the
desirable default behavior is to fail fast on the command line for incomplete
rack mapping. The error message can include further instructions that tell
the user to add an extra argument (like "--allow-partial-rackinfo") to
suppress the error and do an imperfect rack-aware assignment. If the
default behavior is to allow incomplete mapping, the error can still be
easily missed.

The affected command line tools are TopicCommand and
ReassignPartitionsCommand.

Thanks,
Allen
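The fail-fast default proposed above could look roughly like this (a sketch; the `--allow-partial-rackinfo` flag name is taken from the mail above, while the function and its signature are hypothetical):

```python
def validate_rack_mapping(broker_racks, allow_partial=False):
    # broker_racks: dict of broker id -> rack name (None if unknown).
    # Fail fast by default when any broker lacks rack info; the explicit
    # override switch suppresses the error for an imperfect assignment.
    missing = sorted(b for b, rack in broker_racks.items() if rack is None)
    if missing and not allow_partial:
        raise ValueError(
            "brokers %s have no rack mapping; pass --allow-partial-rackinfo "
            "to accept a partially rack-aware assignment" % missing)
    return missing
```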





On Mon, Oct 26, 2015 at 12:55 PM, Aditya Auradkar 
wrote:

> Hi Allen,
>
> For TopicMetadataResponse to understand version, you can bump up the
> request version itself. Based on the version of the request, the response
> can be appropriately serialized. It shouldn't be a huge change. For
> example: We went through something similar for ProduceRequest recently (
> https://reviews.apache.org/r/33378/)
> I guess the reason protocol information is not included in the TMR is
> because the topic itself is independent of any particular protocol (SSL vs
> Plaintext). Having said that, I'm not sure we even need rack information in
> TMR. What use case were you thinking of initially?
>
> For 1 - I'd be fine with adding an option to the command line tools that
> check rack assignment. For e.g. "--strict-assignment" or something similar.
>
> Aditya
>
> On Thu, Oct 22, 2015 at 6:44 PM, Allen Wang  wrote:
>
> > For 2 and 3, I have updated the KIP. Please take a look. One thing I have
> > changed is removing the proposal to add rack to TopicMetadataResponse.
> The
> > reason is that unlike UpdateMetadataRequest, TopicMetadataResponse does
> not
> > understand version. I don't see a way to include rack without breaking
> old
> > version of clients. That's probably why secure protocol is not included
> in
> > the TopicMetadataResponse either. I think it will be a much bigger change
> > to include rack in TopicMetadataResponse.
> >
> > For 1, my concern is that doing rack-aware assignment without a complete
> > broker-to-rack mapping will result in an assignment that is not rack aware
> > and fails to provide fault tolerance in the event of a rack outage. This
> > kind of
> > problem will be difficult to surface. And the cost of this problem is
> high:
> > you have to do partition reassignment if you are lucky enough to spot the
> > problem early on, or face the consequence of data loss during a real rack
> > outage.
> >
> > I do see the concern of fail-fast as it might also cause data loss if
> > producer is not able to produce the message due to topic creation failure.
> Is
> > it feasible to treat dynamic topic creation and command tools
> differently?
> > We allow dynamic topic creation with incomplete broker-rack mapping and
> > fail fast in command line. Another option is to let the user determine the
> > behavior for command line. For example, by default fail fast in command
> > line but allow incomplete broker-rack mapping if another switch is
> > provided.
> >
> >
> >
> >
> > On Tue, Oct 20, 2015 at 10:05 AM, Aditya Auradkar <
> > aaurad...@linkedin.com.invalid> wrote:
> >
> > > Hey Allen,
> > >
> > > 1. If we choose fail fast topic creation, we will have topic creation
> > > failures while upgrading the cluster. I really doubt we want this
> > behavior.
> > > Ideally, this should be invisible to clients of a cluster. Currently,
> > each
> > > broker is effectively its own rack. So we probably can use the rack
> > > information whenever possible but not make it a hard requirement. To
> > extend
> > > Gwen's example, one badly configured broker should not degrade topic
> > > creation for the entire cluster.
> > >
> > > 2. Upgrade scenario - Can you add a section on the upgrade piece to
> > confirm
> > > that old clients will not see errors? I believe
> > ZookeeperConsumerConnector
> > > reads the Broker objects from ZK. I wanted to confirm that this will
> not
> > > cause any problems.
> > >
> > > 3. Could you elaborate your proposed changes to the
> UpdateMetadataRequest
> > > in the "Public Interfaces" section? Personally, I find this format easy
> > to
> > > read in terms of wire protocol changes:
> > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-4+-+Command+line+and+centralized+administrative+operations#KIP-4-Commandlineandcentralizedadministrativeoperations-CreateTopicRequest
> > >
> > > Aditya
> > >
> > > On Fri, Oct 16, 2015 at 3:45 PM, Allen Wang 
> > wrote:
> > >
> > > > KIP is updated in

Build failed in Jenkins: kafka-trunk-jdk7 #725

2015-10-27 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] HOTFIX: group rebalance can throw illegal generation or rebalance in

[wangguoz] HOTFIX: fix off-by-one stream offset commit

[wangguoz] HOTFIX: call consumer.poll() even when no task is assigned

[wangguoz] KAFKA-1888: rolling upgrade test

[wangguoz] KAFKA-2677: ensure consumer sees coordinator disconnects

[wangguoz] HOTFIX: correct sourceNodes for kstream.through()

--
[...truncated 1073 lines...]
kafka.integration.PlaintextTopicMetadataTest > 
testIsrAfterBrokerShutDownAndJoinsBack PASSED

kafka.integration.PlaintextTopicMetadataTest > testAutoCreateTopicWithCollision 
PASSED

kafka.integration.PlaintextTopicMetadataTest > testAliveBrokerListWithNoTopics 
PASSED

kafka.integration.PlaintextTopicMetadataTest > testAutoCreateTopic PASSED

kafka.integration.PlaintextTopicMetadataTest > testGetAllTopicMetadata PASSED

kafka.integration.PlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup PASSED

kafka.integration.PlaintextTopicMetadataTest > testBasicTopicMetadata PASSED

kafka.integration.PlaintextTopicMetadataTest > testTopicMetadataRequest PASSED

kafka.integration.PlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown PASSED

kafka.integration.PrimitiveApiTest > testMultiProduce PASSED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch PASSED

kafka.integration.PrimitiveApiTest > testFetchRequestCanProperlySerialize PASSED

kafka.integration.PrimitiveApiTest > testPipelinedProduceRequests PASSED

kafka.integration.PrimitiveApiTest > testProduceAndMultiFetch PASSED

kafka.integration.PrimitiveApiTest > 
testDefaultEncoderProducerAndFetchWithCompression PASSED

kafka.integration.PrimitiveApiTest > testConsumerEmptyTopic PASSED

kafka.integration.PrimitiveApiTest > testEmptyFetchRequest PASSED

kafka.integration.MinIsrConfigTest > testDefaultKafkaConfig PASSED

kafka.integration.RollingBounceTest > testRollingBounce PASSED

kafka.integration.SaslSslTopicMetadataTest > 
testIsrAfterBrokerShutDownAndJoinsBack PASSED

kafka.integration.SaslSslTopicMetadataTest > testAutoCreateTopicWithCollision 
PASSED

kafka.integration.SaslSslTopicMetadataTest > testAliveBrokerListWithNoTopics 
PASSED

kafka.integration.SaslSslTopicMetadataTest > testAutoCreateTopic PASSED

kafka.integration.SaslSslTopicMetadataTest > testGetAllTopicMetadata PASSED

kafka.integration.SaslSslTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup PASSED

kafka.integration.SaslSslTopicMetadataTest > testBasicTopicMetadata PASSED

kafka.integration.SaslSslTopicMetadataTest > testTopicMetadataRequest PASSED

kafka.integration.SaslSslTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown PASSED

kafka.integration.SslTopicMetadataTest > testIsrAfterBrokerShutDownAndJoinsBack 
PASSED

kafka.integration.SslTopicMetadataTest > testAutoCreateTopicWithCollision PASSED

kafka.integration.SslTopicMetadataTest > testAliveBrokerListWithNoTopics PASSED

kafka.integration.SslTopicMetadataTest > testAutoCreateTopic PASSED

kafka.integration.SslTopicMetadataTest > testGetAllTopicMetadata PASSED

kafka.integration.SslTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup PASSED

kafka.integration.SslTopicMetadataTest > testBasicTopicMetadata PASSED

kafka.integration.SslTopicMetadataTest > testTopicMetadataRequest PASSED

kafka.integration.SslTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown PASSED

kafka.consumer.PartitionAssignorTest > testRoundRobinPartitionAssignor PASSED

kafka.consumer.PartitionAssignorTest > testRangePartitionAssignor PASSED

kafka.consumer.MetricsTest > testMetricsReporterAfterDeletingTopic PASSED

kafka.consumer.MetricsTest > testMetricsLeak PASSED

kafka.consumer.ZookeeperConsumerConnectorTest > testBasic PASSED

kafka.consumer.ZookeeperConsumerConnectorTest > testCompressionSetConsumption 
PASSED

kafka.consumer.ZookeeperConsumerConnectorTest > testLeaderSelectionForPartition 
PASSED

kafka.consumer.ZookeeperConsumerConnectorTest > testConsumerDecoder PASSED

kafka.consumer.ZookeeperConsumerConnectorTest > testConsumerRebalanceListener 
PASSED

kafka.consumer.ZookeeperConsumerConnectorTest > testCompression PASSED

kafka.consumer.ConsumerIteratorTest > 
testConsumerIteratorDeduplicationDeepIterator PASSED

kafka.consumer.ConsumerIteratorTest > testConsumerIteratorDecodingFailure PASSED

kafka.consumer.TopicFilterTest > testWhitelists PASSED

kafka.consumer.TopicFilterTest > 
testWildcardTopicCountGetTopicCountMapEscapeJson PASSED

kafka.consumer.TopicFilterTest > testBlacklists PASSED

kafka.server.ServerShutdownTest > testCleanShutdownAfterFailedStartup PASSED

kafka.server.ServerShutdownTest > testConsecutiveShutdown PASSED

kafka.server.ServerShutdownTest > testCleanShutdown PASSED

kafka.server.ServerShutdownTest > testCleanShutdownWithDelete

[jira] [Commented] (KAFKA-2698) add paused API

2015-10-27 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14977442#comment-14977442
 ] 

Jiangjie Qin commented on KAFKA-2698:
-

Makes sense to me. We probably need it.

> add paused API
> --
>
> Key: KAFKA-2698
> URL: https://issues.apache.org/jira/browse/KAFKA-2698
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Reporter: Onur Karaman
> Fix For: 0.9.0.0
>
>
> org.apache.kafka.clients.consumer.Consumer tends to follow a pattern of 
> having an action API paired with a query API:
> subscribe() has subscription()
> assign() has assignment()
> There's no analogous API for pause.
> Should there be a paused() API returning Set?
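The action/query pairing being requested can be illustrated with a toy class (illustrative Python, not the actual consumer implementation):

```python
class SketchConsumer:
    # Each mutating call has a matching read-only accessor:
    # subscribe()/subscription(), assign()/assignment(), and the
    # proposed pause()/paused().
    def __init__(self):
        self._subscription = set()
        self._assignment = set()
        self._paused = set()
    def subscribe(self, topics):
        self._subscription = set(topics)
    def subscription(self):
        return set(self._subscription)
    def assign(self, partitions):
        self._assignment = set(partitions)
    def assignment(self):
        return set(self._assignment)
    def pause(self, partitions):
        self._paused |= set(partitions) & self._assignment
    def paused(self):
        return set(self._paused)  # the query API this issue proposes
```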





[jira] [Created] (KAFKA-2698) add paused API

2015-10-27 Thread Onur Karaman (JIRA)
Onur Karaman created KAFKA-2698:
---

 Summary: add paused API
 Key: KAFKA-2698
 URL: https://issues.apache.org/jira/browse/KAFKA-2698
 Project: Kafka
  Issue Type: Sub-task
Reporter: Onur Karaman


org.apache.kafka.clients.consumer.Consumer tends to follow a pattern of having 
an action API paired with a query API:
subscribe() has subscription()
assign() has assignment()

There's no analogous API for pause.

Should there be a paused() API returning Set?





[GitHub] kafka pull request: HOTFIX: correct sourceNodes for kstream.throug...

2015-10-27 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/374




[jira] [Resolved] (KAFKA-2677) Coordinator disconnects not propagated to new consumer

2015-10-27 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-2677.
--
   Resolution: Fixed
Fix Version/s: 0.9.0.0

Issue resolved by pull request 349
[https://github.com/apache/kafka/pull/349]

> Coordinator disconnects not propagated to new consumer
> --
>
> Key: KAFKA-2677
> URL: https://issues.apache.org/jira/browse/KAFKA-2677
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
> Fix For: 0.9.0.0
>
>
> Currently, disconnects by the coordinator are not always seen by the 
> consumer. This can result in a long delay after the old coordinator has 
> shutdown or failed before the consumer knows that it needs to find the new 
> coordinator. The NetworkClient makes socket disconnects available to users in 
> two ways:
> 1. through a flag in the ClientResponse object for requests pending when the 
> disconnect occurred, and 
> 2. through the connectionFailed() method. 
> The first method clearly cannot be depended on since it only helps when a 
> request is pending, which is relatively rare for the connection with the 
> coordinator. Instead, we can probably use the second method with a little 
> rework of ConsumerNetworkClient to check for failed connections immediately 
> after returning from poll(). 
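The second option amounts to something like the following loop step (a sketch with invented names and a stand-in network client; the real change lives in ConsumerNetworkClient):

```python
def poll_and_check_coordinator(client, coordinator_id, on_disconnect):
    # After each poll, ask the network layer directly whether the
    # coordinator connection failed (option 2), instead of waiting for a
    # pending request to carry the disconnect flag (option 1).
    client.poll()
    if client.connection_failed(coordinator_id):
        on_disconnect()  # mark coordinator unknown, trigger rediscovery
        return False
    return True

class FakeNetworkClient:
    """Stand-in network client for the sketch above (illustrative)."""
    def __init__(self, failed):
        self._failed = failed
        self.polls = 0
    def poll(self):
        self.polls += 1
    def connection_failed(self, node_id):
        return self._failed
```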





[jira] [Commented] (KAFKA-2677) Coordinator disconnects not propagated to new consumer

2015-10-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14977348#comment-14977348
 ] 

ASF GitHub Bot commented on KAFKA-2677:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/349


> Coordinator disconnects not propagated to new consumer
> --
>
> Key: KAFKA-2677
> URL: https://issues.apache.org/jira/browse/KAFKA-2677
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
> Fix For: 0.9.0.0
>
>
> Currently, disconnects by the coordinator are not always seen by the 
> consumer. This can result in a long delay after the old coordinator has 
> shutdown or failed before the consumer knows that it needs to find the new 
> coordinator. The NetworkClient makes socket disconnects available to users in 
> two ways:
> 1. through a flag in the ClientResponse object for requests pending when the 
> disconnect occurred, and 
> 2. through the connectionFailed() method. 
> The first method clearly cannot be depended on since it only helps when a 
> request is pending, which is relatively rare for the connection with the 
> coordinator. Instead, we can probably use the second method with a little 
> rework of ConsumerNetworkClient to check for failed connections immediately 
> after returning from poll(). 





[GitHub] kafka pull request: KAFKA-2677: ensure consumer sees coordinator d...

2015-10-27 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/349




[GitHub] kafka pull request: HOTFIX: correct sourceNodes for kstream.throug...

2015-10-27 Thread ymatsuda
GitHub user ymatsuda opened a pull request:

https://github.com/apache/kafka/pull/374

HOTFIX: correct sourceNodes for kstream.through()

@guozhangwang 

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ymatsuda/kafka fix_through_operator

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/374.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #374


commit 3a18a5cd7c2a69e95be35d606c2642c47b28d13d
Author: Yasuhiro Matsuda 
Date:   2015-10-27T23:07:35Z

HOTFIX: correct sourceNodes for kstream.through()






[jira] [Resolved] (KAFKA-1888) Add a "rolling upgrade" system test

2015-10-27 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-1888.
--
Resolution: Fixed

Issue resolved by pull request 229
[https://github.com/apache/kafka/pull/229]

> Add a "rolling upgrade" system test
> ---
>
> Key: KAFKA-1888
> URL: https://issues.apache.org/jira/browse/KAFKA-1888
> Project: Kafka
>  Issue Type: Improvement
>  Components: system tests
>Reporter: Gwen Shapira
>Assignee: Geoff Anderson
> Fix For: 0.9.0.0
>
> Attachments: KAFKA-1888_2015-03-23_11:54:25.patch
>
>
> To help test upgrades and compatibility between versions, it will be cool to 
> add a rolling-upgrade test to system tests:
> Given two versions (just a path to the jars?), check that you can do a
> rolling upgrade of the brokers from one version to another (using clients 
> from the old version) without losing data.





[jira] [Commented] (KAFKA-1888) Add a "rolling upgrade" system test

2015-10-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14977270#comment-14977270
 ] 

ASF GitHub Bot commented on KAFKA-1888:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/229


> Add a "rolling upgrade" system test
> ---
>
> Key: KAFKA-1888
> URL: https://issues.apache.org/jira/browse/KAFKA-1888
> Project: Kafka
>  Issue Type: Improvement
>  Components: system tests
>Reporter: Gwen Shapira
>Assignee: Geoff Anderson
> Fix For: 0.9.0.0
>
> Attachments: KAFKA-1888_2015-03-23_11:54:25.patch
>
>
> To help test upgrades and compatibility between versions, it will be cool to 
> add a rolling-upgrade test to system tests:
> Given two versions (just a path to the jars?), check that you can do a
> rolling upgrade of the brokers from one version to another (using clients 
> from the old version) without losing data.





[GitHub] kafka pull request: KAFKA-1888: rolling upgrade test

2015-10-27 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/229




[jira] [Commented] (KAFKA-2644) Run relevant ducktape tests with SASL_PLAINTEXT and SASL_SSL

2015-10-27 Thread Rajini Sivaram (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14977233#comment-14977233
 ] 

Rajini Sivaram commented on KAFKA-2644:
---

[~ijuma] Thank you, yes, that error was indeed occurring because the principal 
was not added. I was running the tests locally with a newer version of ducktape 
than the one being used in the Confluent Jenkins build. I have updated the 
tests to work with the older version of ducktape and now most of the tests are 
passing. Replication tests alone are failing and I am waiting for Geoff's patch 
before rerunning the tests.

> Run relevant ducktape tests with SASL_PLAINTEXT and SASL_SSL
> 
>
> Key: KAFKA-2644
> URL: https://issues.apache.org/jira/browse/KAFKA-2644
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Ismael Juma
>Assignee: Rajini Sivaram
>Priority: Critical
> Fix For: 0.9.0.0
>
>
> We need to define which of the existing ducktape tests are relevant. cc 
> [~rsivaram]





[jira] [Updated] (KAFKA-2675) SASL/Kerberos follow-up

2015-10-27 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-2675:
---
Description: 
This is a follow-up to KAFKA-1686. 

1. Decide on `serviceName` configuration: do we want to keep it in two places?
2. auth.to.local config name is a bit opaque, is there a better one?
3. Implement or remove SASL_KAFKA_SERVER_REALM config
4. Consider making Login's thread a daemon thread
5. Write test that shows authentication failure due to principal in JAAS file 
not being present in MiniKDC

  was:
This is a follow-up to KAFKA-1686. 

1. Decide on `serviceName` configuration: do we want to keep it in two places?
2. auth.to.local config name is a bit opaque, is there a better one?
3. Implement or remove SASL_KAFKA_SERVER_REALM config
4. Consider making Login's thread a daemon thread
5. Write test that shows authentication failure due to invalid user
6. Write test that shows authentication failure due to wrong password
7. Write test that shows authentication failure due ticket expiring


> SASL/Kerberos follow-up
> ---
>
> Key: KAFKA-2675
> URL: https://issues.apache.org/jira/browse/KAFKA-2675
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Ismael Juma
>Assignee: Ismael Juma
> Fix For: 0.9.0.0
>
>
> This is a follow-up to KAFKA-1686. 
> 1. Decide on `serviceName` configuration: do we want to keep it in two places?
> 2. auth.to.local config name is a bit opaque, is there a better one?
> 3. Implement or remove SASL_KAFKA_SERVER_REALM config
> 4. Consider making Login's thread a daemon thread
> 5. Write test that shows authentication failure due to principal in JAAS file 
> not being present in MiniKDC





[jira] [Commented] (KAFKA-2644) Run relevant ducktape tests with SASL_PLAINTEXT and SASL_SSL

2015-10-27 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14977215#comment-14977215
 ] 

Ismael Juma commented on KAFKA-2644:


[~rsivaram], that error typically happens when a principal cannot be found in 
the JAAS config file. I have a branch where the error is a bit more helpful.

> Run relevant ducktape tests with SASL_PLAINTEXT and SASL_SSL
> 
>
> Key: KAFKA-2644
> URL: https://issues.apache.org/jira/browse/KAFKA-2644
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Ismael Juma
>Assignee: Rajini Sivaram
>Priority: Critical
> Fix For: 0.9.0.0
>
>
> We need to define which of the existing ducktape tests are relevant. cc 
> [~rsivaram]





[jira] [Updated] (KAFKA-2690) Protect passwords from logging

2015-10-27 Thread Jakub Nowak (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakub Nowak updated KAFKA-2690:
---
Status: In Progress  (was: Patch Available)

> Protect passwords from logging
> --
>
> Key: KAFKA-2690
> URL: https://issues.apache.org/jira/browse/KAFKA-2690
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Ismael Juma
>Assignee: Jakub Nowak
> Fix For: 0.9.0.0
>
>
> We currently store the key (ssl.key.password), keystore 
> (ssl.keystore.password) and truststore (ssl.truststore.password) passwords as 
> a String in `KafkaConfig`, `ConsumerConfig` and `ProducerConfig`.
> The problem with this approach is that we may accidentally log the password 
> when logging the config.
> A possible solution is to introduce a new `ConfigDef.Type` that overrides 
> `toString` so that the value is hidden.
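The proposed config type could behave like this small wrapper (a Python sketch of the idea only, not the actual `ConfigDef` code): string conversions never reveal the secret, so logging the whole config cannot leak it.

```python
class Password:
    # Holds a secret whose string forms never reveal the value; callers
    # must ask for value() explicitly.
    def __init__(self, value):
        self._value = value
    def value(self):
        return self._value
    def __str__(self):
        return "[hidden]"
    def __repr__(self):
        return "[hidden]"
```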





[GitHub] kafka pull request: HOTFIX: call consumer.poll() even when no task...

2015-10-27 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/373




[jira] [Commented] (KAFKA-2648) Coordinator should not allow empty groupIds

2015-10-27 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14977130#comment-14977130
 ] 

Guozhang Wang commented on KAFKA-2648:
--

We can just check non-empty group-id in join-group / sync-group requests, but 
not in offset commit / fetch requests.

> Coordinator should not allow empty groupIds
> ---
>
> Key: KAFKA-2648
> URL: https://issues.apache.org/jira/browse/KAFKA-2648
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>
> The coordinator currently allows consumer groups with empty groupIds, but 
> there probably aren't any cases where this is actually a good idea and it 
> tends to mask problems where different groups have simply not configured a 
> groupId. To address this, we can add a new error code, say INVALID_GROUP_ID, 
> which the coordinator can return when it encounters an empty groupId. We 
> should also make groupId a required property in consumer configuration and 
> enforce that it is non-empty. 
> It's a little unclear whether this change would have compatibility concerns. 
> The old consumer will fail with an empty groupId (because it cannot create 
> the zookeeper paths), but other clients may allow it.
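The check itself is small; a sketch of the proposed validation (the constant name below is illustrative, not the actual error-code definition):

```python
INVALID_GROUP_ID = "INVALID_GROUP_ID"  # illustrative error-code name

def validate_group_id(group_id):
    # Reject null or empty group ids at join-group time instead of
    # silently accepting them; return an error code, or None if valid.
    if not group_id:
        return INVALID_GROUP_ID
    return None
```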





[GitHub] kafka pull request: HOTFIX: call consumer.poll() even when no task...

2015-10-27 Thread ymatsuda
GitHub user ymatsuda opened a pull request:

https://github.com/apache/kafka/pull/373

HOTFIX: call consumer.poll() even when no task is assigned

StreamThread should keep calling consumer.poll() even when no task is 
assigned. This is necessary to get a task.

@guozhangwang 


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ymatsuda/kafka no_task

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/373.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #373


commit 0e9acf0a1d037e3ee0d4916cd515d09c47b77657
Author: Yasuhiro Matsuda 
Date:   2015-10-27T20:45:29Z

HOTFIX: call consumer.poll() even when no task is assigned






[GitHub] kafka pull request: HOTFIX: fix off-by-one stream offset commit

2015-10-27 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/372




[GitHub] kafka pull request: HOTFIX: group rebalance can throw illegal gene...

2015-10-27 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/370




[GitHub] kafka pull request: HOTFIX: fix off-by-one stream offset commit

2015-10-27 Thread ymatsuda
GitHub user ymatsuda opened a pull request:

https://github.com/apache/kafka/pull/372

HOTFIX: fix off-by-one stream offset commit

@guozhangwang 

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ymatsuda/kafka commit_offset

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/372.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #372


commit dea0ad09d9d977d0d0d30ecc12d2b305daf6d55d
Author: Yasuhiro Matsuda 
Date:   2015-10-27T20:18:54Z

HOTFIX: fix off-by-one stream offset commit






[jira] [Commented] (KAFKA-2644) Run relevant ducktape tests with SASL_PLAINTEXT and SASL_SSL

2015-10-27 Thread Rajini Sivaram (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14977076#comment-14977076
 ] 

Rajini Sivaram commented on KAFKA-2644:
---

[~geoffra] Thank you, I can wait. Feel free to cancel the current test run if 
you need. I can merge and rerun the tests tomorrow morning.

> Run relevant ducktape tests with SASL_PLAINTEXT and SASL_SSL
> 
>
> Key: KAFKA-2644
> URL: https://issues.apache.org/jira/browse/KAFKA-2644
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Ismael Juma
>Assignee: Rajini Sivaram
>Priority: Critical
> Fix For: 0.9.0.0
>
>
> We need to define which of the existing ducktape tests are relevant. cc 
> [~rsivaram]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2644) Run relevant ducktape tests with SASL_PLAINTEXT and SASL_SSL

2015-10-27 Thread Geoff Anderson (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14977049#comment-14977049
 ] 

Geoff Anderson commented on KAFKA-2644:
---

[~rsivaram] It looks like your current test run 
(http://jenkins.confluent.io/job/kafka_system_tests_branch_builder/131/console) 
has some failures because some tools were recently moved from the 
org.apache.kafka.client.tools package to org.apache.kafka.tools.

My patch includes the fix for this among other things 
(https://github.com/apache/kafka/pull/229) and will go through shortly, if you 
don't mind waiting.


> Run relevant ducktape tests with SASL_PLAINTEXT and SASL_SSL
> 
>
> Key: KAFKA-2644
> URL: https://issues.apache.org/jira/browse/KAFKA-2644
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Ismael Juma
>Assignee: Rajini Sivaram
>Priority: Critical
> Fix For: 0.9.0.0
>
>
> We need to define which of the existing ducktape tests are relevant. cc 
> [~rsivaram]





Build failed in Jenkins: kafka-trunk-jdk8 #62

2015-10-27 Thread Apache Jenkins Server
See 

Changes:

[cshapi] MINOR: Fix compiler error in `KafkaLog4jAppender`

--
[...truncated 5210 lines...]

kafka.coordinator.MemberMetadataTest > testVoteRaisesOnNoSupportedProtocols 
PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[0] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[1] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[2] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[3] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[4] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[5] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[6] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[7] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[8] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[9] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[10] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[11] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[12] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[13] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[14] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[15] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[16] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[17] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[18] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[19] PASSED

kafka.log.OffsetMapTest > testClear PASSED

kafka.log.OffsetMapTest > testBasicValidation PASSED

kafka.log.LogManagerTest > testCleanupSegmentsToMaintainSize PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithRelativeDirectory 
PASSED

kafka.log.LogManagerTest > testGetNonExistentLog PASSED

kafka.log.LogManagerTest > testTwoLogManagersUsingSameDirFails PASSED

kafka.log.LogManagerTest > testLeastLoadedAssignment PASSED

kafka.log.LogManagerTest > testCleanupExpiredSegments PASSED

kafka.log.LogManagerTest > testCheckpointRecoveryPoints PASSED

kafka.log.LogManagerTest > testTimeBasedFlush PASSED

kafka.log.LogManagerTest > testCreateLog PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithTrailingSlash PASSED

kafka.log.LogSegmentTest > testRecoveryWithCorruptMessage PASSED

kafka.log.LogSegmentTest > testRecoveryFixesCorruptIndex PASSED

kafka.log.LogSegmentTest > testReadFromGap PASSED

kafka.log.LogSegmentTest > testTruncate PASSED

kafka.log.LogSegmentTest > testReadBeforeFirstOffset PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeAppendMessage PASSED

kafka.log.LogSegmentTest > testChangeFileSuffixes PASSED

kafka.log.LogSegmentTest > testMaxOffset PASSED

kafka.log.LogSegmentTest > testNextOffsetCalculation PASSED

kafka.log.LogSegmentTest > testReadOnEmptySegment PASSED

kafka.log.LogSegmentTest > testReadAfterLast PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeClearShutdown PASSED

kafka.log.LogSegmentTest > testTruncateFull PASSED

kafka.log.FileMessageSetTest > testTruncate PASSED

kafka.log.FileMessageSetTest > testIterationOverPartialAndTruncation PASSED

kafka.log.FileMessageSetTest > testRead PASSED

kafka.log.FileMessageSetTest > testFileSize PASSED

kafka.log.FileMessageSetTest > testIteratorWithLimits PASSED

kafka.log.FileMessageSetTest > testPreallocateTrue PASSED

kafka.log.FileMessageSetTest > testIteratorIsConsistent PASSED

kafka.log.FileMessageSetTest > testIterationDoesntChangePosition PASSED

kafka.log.FileMessageSetTest > testWrittenEqualsRead PASSED

kafka.log.FileMessageSetTest > testWriteTo PASSED

kafka.log.FileMessageSetTest > testPreallocateFalse PASSED

kafka.log.FileMessageSetTest > testPreallocateClearShutdown PASSED

kafka.log.FileMessageSetTest > testSearch PASSED

kafka.log.FileMessageSetTest > testSizeInBytes PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[0] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[1] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[2] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[3] PASSED

kafka.log.LogTest > testParseTopicPartitionNameForMissingTopic PASSED

kafka.log.LogTest > testIndexRebuild PASSED

kafka.log.LogTest > testLogRolls PASSED

kafka.log.LogTest > testMessageSizeCheck PASSED

kafka.log.LogTest > testAsyncDelete PASSED

kafka.log.LogTest > testReadOutOfRange PASSED

kafka.log.LogTest > testReadAtLogGap PASSED

kafka.log.LogTest > testTimeBasedLogRoll PASSED

kafka.log.LogTest > testLoadEmptyLog PASSED

kafka.log.LogTest > testMessageSetSizeCheck PASSED

kafka.log.LogTest > testIndexResizingAtTruncation PASSED

kafka.log.LogTest > testCompactedTopicConstraints PASSED

kafka.log.LogTest > testThatGarbageCollectingSegmentsDoesntChangeOffset 

[jira] [Updated] (KAFKA-2690) Protect passwords from logging

2015-10-27 Thread Jakub Nowak (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakub Nowak updated KAFKA-2690:
---
Status: Patch Available  (was: Open)

> Protect passwords from logging
> --
>
> Key: KAFKA-2690
> URL: https://issues.apache.org/jira/browse/KAFKA-2690
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Ismael Juma
>Assignee: Jakub Nowak
> Fix For: 0.9.0.0
>
>
> We currently store the key (ssl.key.password), keystore 
> (ssl.keystore.password) and truststore (ssl.truststore.password) passwords as 
> a String in `KafkaConfig`, `ConsumerConfig` and `ProducerConfig`.
> The problem with this approach is that we may accidentally log the password 
> when logging the config.
> A possible solution is to introduce a new `ConfigDef.Type` that overrides 
> `toString` so that the value is hidden.
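The proposed fix can be illustrated with a minimal sketch (the Password class below is hypothetical, for illustration only, not Kafka's actual implementation): a wrapper type whose toString hides the value, so accidentally logging the config prints a placeholder instead of the secret.

```java
// Minimal sketch of hiding a password from logs via a toString override.
// The Password class here is hypothetical, for illustration only.
public class Password {
    private final String value;

    public Password(String value) {
        this.value = value;
    }

    // Intentionally hide the secret when the object is logged or printed.
    @Override
    public String toString() {
        return "[hidden]";
    }

    // Explicit accessor for code that genuinely needs the raw value.
    public String value() {
        return value;
    }

    public static void main(String[] args) {
        Password p = new Password("secret123");
        // Logging the config value no longer leaks the password.
        System.out.println("ssl.keystore.password = " + p);
    }
}
```

Any code path that formats the config (logging, error messages, toString on the config map) then sees only "[hidden]"; the raw value is available solely through the explicit accessor.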





[GitHub] kafka pull request: Hide passwords while logging the config.

2015-10-27 Thread Mszak
GitHub user Mszak opened a pull request:

https://github.com/apache/kafka/pull/371

Hide passwords while logging the config.

Added a PASSWORD_STRING type in ConfigDef that returns "[hidden]" when its 
toString method is invoked.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/Mszak/kafka ssl-password-protection

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/371.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #371


commit 6cb03e19937db1558f62d44cd53aaab2f583394b
Author: Jakub Nowak 
Date:   2015-10-27T19:18:55Z

Hide passwords while logging the config.






[GitHub] kafka pull request: HOTFIX: group rebalance can throw illegal gene...

2015-10-27 Thread hachikuji
GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/370

HOTFIX: group rebalance can throw illegal generation or rebalance in 
progress



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka hotfix-rebalance-error

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/370.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #370


commit 492f86e1b3196eecdb2a1d952a70216777280d68
Author: Jason Gustafson 
Date:   2015-10-27T17:44:46Z

HOTFIX: sync group can throw illegal generation or rebalance in progress






Jenkins build is back to normal : kafka-trunk-jdk7 #724

2015-10-27 Thread Apache Jenkins Server
See 



[jira] [Commented] (KAFKA-2441) SSL/TLS in official docs

2015-10-27 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14976914#comment-14976914
 ] 

Sriharsha Chintalapani commented on KAFKA-2441:
---

[~granthenke] I am working on it. Thanks.

> SSL/TLS in official docs
> 
>
> Key: KAFKA-2441
> URL: https://issues.apache.org/jira/browse/KAFKA-2441
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Ismael Juma
>Assignee: Sriharsha Chintalapani
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> We need to add a section in the official documentation regarding SSL/TLS:
> http://kafka.apache.org/documentation.html
> There is already a wiki page where some of the information is already present:
> https://cwiki.apache.org/confluence/display/KAFKA/Deploying+SSL+for+Kafka





[jira] [Commented] (KAFKA-2441) SSL/TLS in official docs

2015-10-27 Thread Grant Henke (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14976895#comment-14976895
 ] 

Grant Henke commented on KAFKA-2441:


[~sriharsha] Are you working on this? If not, I can try adapting your wiki page 
to the docs and post it for review.

> SSL/TLS in official docs
> 
>
> Key: KAFKA-2441
> URL: https://issues.apache.org/jira/browse/KAFKA-2441
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Ismael Juma
>Assignee: Sriharsha Chintalapani
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> We need to add a section in the official documentation regarding SSL/TLS:
> http://kafka.apache.org/documentation.html
> There is already a wiki page where some of the information is already present:
> https://cwiki.apache.org/confluence/display/KAFKA/Deploying+SSL+for+Kafka





[jira] [Created] (KAFKA-2697) add leave group logic to the consumer

2015-10-27 Thread Onur Karaman (JIRA)
Onur Karaman created KAFKA-2697:
---

 Summary: add leave group logic to the consumer
 Key: KAFKA-2697
 URL: https://issues.apache.org/jira/browse/KAFKA-2697
 Project: Kafka
  Issue Type: Sub-task
Reporter: Onur Karaman


KAFKA-2397 added logic on the coordinator to handle LeaveGroupRequests. We need 
to add logic to KafkaConsumer to send out a LeaveGroupRequest on close.





[jira] [Commented] (KAFKA-2663) Add quota-delay time to request processing time break-up

2015-10-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14976747#comment-14976747
 ] 

ASF GitHub Bot commented on KAFKA-2663:
---

GitHub user auradkar opened a pull request:

https://github.com/apache/kafka/pull/369

KAFKA-2663, KAFKA-2664 - [Minor] Bugfixes

This has two fixes:
KAFKA-2664 - This patch changes the underlying map implementation of 
Metrics.java to a ConcurrentHashMap. Using a CopyOnWriteMap caused new metric 
creation to become extremely slow when the existing corpus of metrics was 
large; a ConcurrentHashMap speeds up metric creation significantly.

KAFKA-2663 - Splits the throttle time out of the remote time. On throttled 
requests, the remote time went up artificially.
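The performance issue described above is general: a copy-on-write map copies the entire backing map on every write, so inserting N entries costs O(N^2) total work, while ConcurrentHashMap inserts in place. A minimal sketch of the pattern (CowMap is a simplified stand-in, not Kafka's internal CopyOnWriteMap):

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustration of why registering many metrics in a copy-on-write map is
// slow. CowMap is a simplified stand-in, not Kafka's internal CopyOnWriteMap.
public class CowMapDemo {
    static class CowMap<K, V> {
        private volatile Map<K, V> map = Collections.emptyMap();

        // Every put copies the whole existing map: O(size) per insertion,
        // O(n^2) total work to insert n entries.
        public synchronized V put(K key, V value) {
            Map<K, V> copy = new HashMap<>(map);
            V prev = copy.put(key, value);
            map = Collections.unmodifiableMap(copy);
            return prev;
        }

        public V get(K key) {
            return map.get(key); // lock-free read of a consistent snapshot
        }
    }

    public static void main(String[] args) {
        CowMap<String, Integer> cow = new CowMap<>();
        Map<String, Integer> chm = new ConcurrentHashMap<>();
        for (int i = 0; i < 1000; i++) {
            cow.put("metric-" + i, i);  // copies up to 999 entries each time
            chm.put("metric-" + i, i);  // amortized O(1) insertion
        }
        System.out.println(cow.get("metric-500") + " " + chm.get("metric-500"));
    }
}
```

Copy-on-write maps trade cheap lock-free reads for expensive writes, which is fine while the metric set is small but degrades badly as it grows; a ConcurrentHashMap keeps both operations cheap.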

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/auradkar/kafka K-2664

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/369.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #369


commit 2dc50c39bb9ea2c29d4d7663cacc145bf4bcd758
Author: Aditya Auradkar 
Date:   2015-10-27T16:29:29Z

Fix for KAFKA-2664, KAFKA-2663

commit f3abc741312a33fc2aba011fbc179519749af439
Author: Aditya Auradkar 
Date:   2015-10-27T17:06:47Z

revert gradle changes




> Add quota-delay time to request processing time break-up
> 
>
> Key: KAFKA-2663
> URL: https://issues.apache.org/jira/browse/KAFKA-2663
> Project: Kafka
>  Issue Type: Bug
>Reporter: Joel Koshy
>Assignee: Aditya Auradkar
> Fix For: 0.9.0.1
>
>
> This is probably not critical for 0.9 but should be easy to fix:
> If a request is delayed due to quotas, I think the remote time will go up 
> artificially - or maybe response queue time (haven’t checked). We should add 
> a new quotaDelayTime to the request handling time break-up.
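The accounting problem can be sketched as simple arithmetic (the method and variable names below are illustrative, not Kafka's actual RequestMetrics fields): if the quota throttle delay is folded into remote time, remote time is inflated; tracking the delay separately restores the true break-up.

```java
// Illustrative request-time break-up; names are hypothetical, not the
// actual RequestMetrics implementation.
public class RequestTimeBreakup {
    static long remoteTime(long sendToRemoteMs, long remoteDoneMs, long throttleMs) {
        // Without subtracting throttleMs, a quota-delayed request would
        // report an artificially large remote time.
        return (remoteDoneMs - sendToRemoteMs) - throttleMs;
    }

    public static void main(String[] args) {
        long sendToRemote = 100, remoteDone = 400, throttle = 250;
        long naive = remoteDone - sendToRemote;                           // inflated
        long corrected = remoteTime(sendToRemote, remoteDone, throttle);  // true remote
        System.out.println("naive=" + naive + " corrected=" + corrected
                + " quotaDelay=" + throttle);
    }
}
```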





[GitHub] kafka pull request: KAFKA-2663, KAFKA-2664 - [Minor] Bugfixes

2015-10-27 Thread auradkar
GitHub user auradkar opened a pull request:

https://github.com/apache/kafka/pull/369

KAFKA-2663, KAFKA-2664 - [Minor] Bugfixes

This has two fixes:
KAFKA-2664 - This patch changes the underlying map implementation of 
Metrics.java to a ConcurrentHashMap. Using a CopyOnWriteMap caused new metric 
creation to become extremely slow when the existing corpus of metrics was 
large; a ConcurrentHashMap speeds up metric creation significantly.

KAFKA-2663 - Splits the throttle time out of the remote time. On throttled 
requests, the remote time went up artificially.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/auradkar/kafka K-2664

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/369.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #369


commit 2dc50c39bb9ea2c29d4d7663cacc145bf4bcd758
Author: Aditya Auradkar 
Date:   2015-10-27T16:29:29Z

Fix for KAFKA-2664, KAFKA-2663

commit f3abc741312a33fc2aba011fbc179519749af439
Author: Aditya Auradkar 
Date:   2015-10-27T17:06:47Z

revert gradle changes






[GitHub] kafka pull request: MINOR: Fix compiler error in `KafkaLog4jAppend...

2015-10-27 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/368




Re: Build failed in Jenkins: kafka-trunk-jdk8 #61

2015-10-27 Thread Ismael Juma
Looks like the branch for KAFKA-2447 wasn't rebased after the SSL
capitalisation fix went in. Here's a fix:

https://github.com/apache/kafka/pull/368

On Tue, Oct 27, 2015 at 4:26 PM, Apache Jenkins Server <
jenk...@builds.apache.org> wrote:

> See 
>
> Changes:
>
> [cshapi] KAFKA-2447: Add capability to KafkaLog4jAppender to be able to
> use SSL
>
> --
> Started by an SCM change
> [EnvInject] - Loading node environment variables.
> Building remotely on ubuntu-2 (docker Ubuntu ubuntu) in workspace <
> https://builds.apache.org/job/kafka-trunk-jdk8/ws/>
>  > git rev-parse --is-inside-work-tree # timeout=10
> Fetching changes from the remote Git repository
>  > git config remote.origin.url
> https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
> Fetching upstream changes from
> https://git-wip-us.apache.org/repos/asf/kafka.git
>  > git --version # timeout=10
>  > git -c core.askpass=true fetch --tags --progress
> https://git-wip-us.apache.org/repos/asf/kafka.git
> +refs/heads/*:refs/remotes/origin/*
>  > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
>  > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
> Checking out Revision d21cb66e7d21ed3d20fc1e13b9a856f764bb4237
> (refs/remotes/origin/trunk)
>  > git config core.sparsecheckout # timeout=10
>  > git checkout -f d21cb66e7d21ed3d20fc1e13b9a856f764bb4237
>  > git rev-list 2fd645ac2fec7cf089cb8175ee47823b67a07226 # timeout=10
> Setting
> GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
> Setting
> JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
> [kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson546249860918656535.sh
> +
> /home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1/bin/gradle
> To honour the JVM settings for this build a new JVM will be forked. Please
> consider using the daemon:
> http://gradle.org/docs/2.1/userguide/gradle_daemon.html.
> Building project 'core' with Scala version 2.10.5
> :downloadWrapper UP-TO-DATE
>
> BUILD SUCCESSFUL
>
> Total time: 21.349 secs
> Setting
> GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
> Setting
> JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
> [kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson8832205166714546292.sh
> + ./gradlew -Dorg.gradle.project.maxParallelForks=1 clean jarAll
> docsJarAll testAll
> To honour the JVM settings for this build a new JVM will be forked. Please
> consider using the daemon:
> https://docs.gradle.org/2.8/userguide/gradle_daemon.html.
> Building project 'core' with Scala version 2.10.5
> :clean UP-TO-DATE
> :clients:clean
> :contrib:clean UP-TO-DATE
> :copycat:clean UP-TO-DATE
> :core:clean
> :examples:clean
> :log4j-appender:clean
> :streams:clean
> :tools:clean
> :contrib:hadoop-consumer:clean
> :contrib:hadoop-producer:clean
> :copycat:api:clean
> :copycat:file:clean
> :copycat:json:clean
> :copycat:runtime:clean
> :jar_core_2_10_5
> Building project 'core' with Scala version 2.10.5
> :kafka-trunk-jdk8:clients:compileJavawarning: [options] bootstrap class
> path not set in conjunction with -source 1.7
> Note: <
> https://builds.apache.org/job/kafka-trunk-jdk8/ws/clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java>
> uses or overrides a deprecated API.
> Note: Recompile with -Xlint:deprecation for details.
> Note: Some input files use unchecked or unsafe operations.
> Note: Recompile with -Xlint:unchecked for details.
> 1 warning
>
> :kafka-trunk-jdk8:clients:processResources UP-TO-DATE
> :kafka-trunk-jdk8:clients:classes
> :kafka-trunk-jdk8:clients:determineCommitId UP-TO-DATE
> :kafka-trunk-jdk8:clients:createVersionFile
> :kafka-trunk-jdk8:clients:jar
> :kafka-trunk-jdk8:log4j-appender:compileJavawarning: [options] bootstrap
> class path not set in conjunction with -source 1.7
> <
> https://builds.apache.org/job/kafka-trunk-jdk8/ws/log4j-appender/src/main/java/org/apache/kafka/log4jappender/KafkaLog4jAppender.java>:27:
> error: cannot find symbol
> import org.apache.kafka.common.config.SSLConfigs;
>  ^
>   symbol:   class SSLConfigs
>   location: package org.apache.kafka.common.config
> <
> https://builds.apache.org/job/kafka-trunk-jdk8/ws/log4j-appender/src/main/java/org/apache/kafka/log4jappender/KafkaLog4jAppender.java>:49:
> error: cannot find symbol
> private static final String SSL_TRUSTSTORE_LOCATION =
> SSLConfigs.SSL_TRUSTSTORE_LOCATION_CONFIG;
>   ^
>   symbol:   variable SSLConfigs
>   location: class KafkaLog4jAppender
> <
> https://builds.apache.org/job/kafka-trunk-jdk8/ws/log4j-appender/src/main/java/org/apache/kafka/log4jappender/KafkaLog4jAppender.java>:50:
> error: cannot find symbol
> private static 

[GitHub] kafka pull request: MINOR: Fix compiler error in `KafkaLog4jAppend...

2015-10-27 Thread ijuma
GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/368

MINOR: Fix compiler error in `KafkaLog4jAppender`

The branch wasn't rebased after the capitalisation fix for SSL
classes.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka 
fix-kafka-log4j-appender-compiler-error

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/368.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #368


commit a5fd5b61803a20e7b36f93908866cfa5b7c5c4c7
Author: Ismael Juma 
Date:   2015-10-27T16:50:53Z

MINOR: Fix compiler error in `KafkaLog4jAppender`

The branch wasn't rebased after the capitalisation fix for SSL
classes.






Build failed in Jenkins: kafka-trunk-jdk8 #61

2015-10-27 Thread Apache Jenkins Server
See 

Changes:

[cshapi] KAFKA-2447: Add capability to KafkaLog4jAppender to be able to use SSL

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu-2 (docker Ubuntu ubuntu) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision d21cb66e7d21ed3d20fc1e13b9a856f764bb4237 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f d21cb66e7d21ed3d20fc1e13b9a856f764bb4237
 > git rev-list 2fd645ac2fec7cf089cb8175ee47823b67a07226 # timeout=10
Setting 
GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson546249860918656535.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.1/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.5
:downloadWrapper UP-TO-DATE

BUILD SUCCESSFUL

Total time: 21.349 secs
Setting 
GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson8832205166714546292.sh
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 clean jarAll docsJarAll 
testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.8/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.5
:clean UP-TO-DATE
:clients:clean
:contrib:clean UP-TO-DATE
:copycat:clean UP-TO-DATE
:core:clean
:examples:clean
:log4j-appender:clean
:streams:clean
:tools:clean
:contrib:hadoop-consumer:clean
:contrib:hadoop-producer:clean
:copycat:api:clean
:copycat:file:clean
:copycat:json:clean
:copycat:runtime:clean
:jar_core_2_10_5
Building project 'core' with Scala version 2.10.5
:kafka-trunk-jdk8:clients:compileJavawarning: [options] bootstrap class path 
not set in conjunction with -source 1.7
Note: 

 uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
Note: Some input files use unchecked or unsafe operations.
Note: Recompile with -Xlint:unchecked for details.
1 warning

:kafka-trunk-jdk8:clients:processResources UP-TO-DATE
:kafka-trunk-jdk8:clients:classes
:kafka-trunk-jdk8:clients:determineCommitId UP-TO-DATE
:kafka-trunk-jdk8:clients:createVersionFile
:kafka-trunk-jdk8:clients:jar
:kafka-trunk-jdk8:log4j-appender:compileJavawarning: [options] bootstrap class 
path not set in conjunction with -source 1.7
:27:
 error: cannot find symbol
import org.apache.kafka.common.config.SSLConfigs;
 ^
  symbol:   class SSLConfigs
  location: package org.apache.kafka.common.config
:49:
 error: cannot find symbol
private static final String SSL_TRUSTSTORE_LOCATION = 
SSLConfigs.SSL_TRUSTSTORE_LOCATION_CONFIG;
  ^
  symbol:   variable SSLConfigs
  location: class KafkaLog4jAppender
:50:
 error: cannot find symbol
private static final String SSL_TRUSTSTORE_PASSWORD = 
SSLConfigs.SSL_TRUSTSTORE_PASSWORD_CONFIG;
  ^
  symbol:   variable SSLConfigs
  location: class KafkaLog4jAppender
:51:
 error: cannot find symbol
private static final String SSL_KEYSTORE_TYPE

[jira] [Commented] (KAFKA-2696) New KafkaProducer documentation doesn't include all necessary config properties

2015-10-27 Thread Edward Maxwell-Lyte (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14976652#comment-14976652
 ] 

Edward Maxwell-Lyte commented on KAFKA-2696:


Great, will do.

> New KafkaProducer documentation doesn't include all necessary config 
> properties
> ---
>
> Key: KAFKA-2696
> URL: https://issues.apache.org/jira/browse/KAFKA-2696
> Project: Kafka
>  Issue Type: Bug
>  Components: website
>Affects Versions: 0.8.2.2
>Reporter: Edward Maxwell-Lyte
>
> It's missing the definitions for key.serializer and value.serializer. And it 
> would be good to highlight the necessary properties for the new KafkaProducer 
> to work.
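For reference, the minimal configuration for the new producer is bootstrap.servers plus the two serializer classes. A sketch of the required properties (only the Properties object is built here, so no Kafka dependency is needed; the serializer class names are the standard ones shipped with kafka-clients):

```java
import java.util.Properties;

// Minimal set of properties the new KafkaProducer needs. Only the
// Properties object is built here; constructing the producer itself
// would require the kafka-clients dependency.
public class MinimalProducerConfig {
    public static Properties build() {
        Properties props = new Properties();
        // Required: where to find the cluster.
        props.put("bootstrap.servers", "localhost:9092");
        // Required: how to turn keys and values into bytes.
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        return props;
    }

    public static void main(String[] args) {
        // new KafkaProducer<String, String>(build()) would be the next step.
        System.out.println(build());
    }
}
```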





[jira] [Commented] (KAFKA-2696) New KafkaProducer documentation doesn't include all necessary config properties

2015-10-27 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14976628#comment-14976628
 ] 

Gwen Shapira commented on KAFKA-2696:
-

If you'll look at ProducerConfig, you'll see it has a "main" method. Running 
this generates docs from the doc strings in the file. 
Those are added to the documentation as part of the release process (we have a 
JIRA open for adding this step to Gradle so it will happen on build).

So to improve the docs you want to edit the actual configuration file.
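The mechanism described here can be sketched as follows (a toy stand-in, not the real ConfigDef API): each config entry carries its doc string in code, and a main method renders all of them into the documentation table, so the website is regenerated from the source of truth at release time.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy sketch of generating config docs from in-code doc strings, in the
// spirit of ProducerConfig's main method; not the real ConfigDef API.
public class ConfigDocGen {
    static final Map<String, String> DOCS = new LinkedHashMap<>();
    static {
        DOCS.put("bootstrap.servers",
                "A list of host/port pairs used to establish the initial connection.");
        DOCS.put("key.serializer", "Serializer class for keys.");
        DOCS.put("value.serializer", "Serializer class for values.");
    }

    // Render every registered config and its doc string as an HTML table row.
    static String toHtmlTable() {
        StringBuilder sb = new StringBuilder("<table>\n");
        for (Map.Entry<String, String> e : DOCS.entrySet()) {
            sb.append("<tr><td>").append(e.getKey())
              .append("</td><td>").append(e.getValue()).append("</td></tr>\n");
        }
        return sb.append("</table>").toString();
    }

    public static void main(String[] args) {
        // Running this regenerates the docs from the code, keeping the
        // published documentation in sync with the configuration source.
        System.out.println(toHtmlTable());
    }
}
```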

> New KafkaProducer documentation doesn't include all necessary config 
> properties
> ---
>
> Key: KAFKA-2696
> URL: https://issues.apache.org/jira/browse/KAFKA-2696
> Project: Kafka
>  Issue Type: Bug
>  Components: website
>Affects Versions: 0.8.2.2
>Reporter: Edward Maxwell-Lyte
>
> It's missing the definitions for key.serializer and value.serializer. And it 
> would be good to highlight the necessary properties for the new KafkaProducer 
> to work.





[GitHub] kafka pull request: KAFKA-2696: New KafkaProducer documentation do...

2015-10-27 Thread edwardmlyte
Github user edwardmlyte closed the pull request at:

https://github.com/apache/kafka/pull/367




[jira] [Commented] (KAFKA-2696) New KafkaProducer documentation doesn't include all necessary config properties

2015-10-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14976621#comment-14976621
 ] 

ASF GitHub Bot commented on KAFKA-2696:
---

Github user edwardmlyte closed the pull request at:

https://github.com/apache/kafka/pull/367


> New KafkaProducer documentation doesn't include all necessary config 
> properties
> ---
>
> Key: KAFKA-2696
> URL: https://issues.apache.org/jira/browse/KAFKA-2696
> Project: Kafka
>  Issue Type: Bug
>  Components: website
>Affects Versions: 0.8.2.2
>Reporter: Edward Maxwell-Lyte
>
> It's missing the definitions for key.serializer and value.serializer. And it 
> would be good to highlight the necessary properties for the new KafkaProducer 
> to work.





[jira] [Commented] (KAFKA-2696) New KafkaProducer documentation doesn't include all necessary config properties

2015-10-27 Thread Edward Maxwell-Lyte (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14976618#comment-14976618
 ] 

Edward Maxwell-Lyte commented on KAFKA-2696:


Ok, I'll remove the pull request. Any insight into how that magic works, so I can 
be more useful in the future?

> New KafkaProducer documentation doesn't include all necessary config 
> properties
> ---
>
> Key: KAFKA-2696
> URL: https://issues.apache.org/jira/browse/KAFKA-2696
> Project: Kafka
>  Issue Type: Bug
>  Components: website
>Affects Versions: 0.8.2.2
>Reporter: Edward Maxwell-Lyte
>
> It's missing the definitions for key.serializer and value.serializer. And it 
> would be good to highlight the necessary properties for the new KafkaProducer 
> to work.





[jira] [Commented] (KAFKA-2696) New KafkaProducer documentation doesn't include all necessary config properties

2015-10-27 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14976606#comment-14976606
 ] 

Gwen Shapira commented on KAFKA-2696:
-

These docs are generated "automagically" from the code, so fixing this in the 
docs won't do much... it will be resolved in the next release, when the docs are 
regenerated from the latest code.
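Concretely, the minimal configuration the issue asks to highlight can be sketched as follows. This is an illustrative sketch: the broker address is a placeholder, and the serializer class names are the string serializers shipped in the clients jar.

```java
import java.util.Properties;

public class MinimalProducerProps {
    // The three properties the new KafkaProducer cannot start without.
    static Properties minimalConfig() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder address
        props.put("key.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(minimalConfig().stringPropertyNames());
    }
}
```

Everything else (acks, retries, batch size, and so on) has a default, which is why docs that omit key.serializer and value.serializer are misleading: those two have no default.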

> New KafkaProducer documentation doesn't include all necessary config 
> properties
> ---
>
> Key: KAFKA-2696
> URL: https://issues.apache.org/jira/browse/KAFKA-2696
> Project: Kafka
>  Issue Type: Bug
>  Components: website
>Affects Versions: 0.8.2.2
>Reporter: Edward Maxwell-Lyte
>
> It's missing the definitions for key.serializer and value.serializer. And it 
> would be good to highlight the necessary properties for the new KafkaProducer 
> to work.





Build failed in Jenkins: kafka-trunk-jdk8 #60

2015-10-27 Thread Apache Jenkins Server
See 

Changes:

[cshapi] KAFKA-2645: Document potentially breaking changes in the release 
note…

[cshapi] KAFKA-2516: Rename o.a.k.client.tools to o.a.k.tools

[cshapi] KAFKA-2452: Add new consumer option to mirror maker.

--
[...truncated 1730 lines...]
kafka.log.LogTest > testMessageSizeCheck PASSED

kafka.log.LogTest > testAsyncDelete PASSED

kafka.log.LogTest > testReadOutOfRange PASSED

kafka.log.LogTest > testReadAtLogGap PASSED

kafka.log.LogTest > testTimeBasedLogRoll PASSED

kafka.log.LogTest > testLoadEmptyLog PASSED

kafka.log.LogTest > testMessageSetSizeCheck PASSED

kafka.log.LogTest > testIndexResizingAtTruncation PASSED

kafka.log.LogTest > testCompactedTopicConstraints PASSED

kafka.log.LogTest > testThatGarbageCollectingSegmentsDoesntChangeOffset PASSED

kafka.log.LogTest > testAppendAndReadWithSequentialOffsets PASSED

kafka.log.LogTest > testParseTopicPartitionNameForNull PASSED

kafka.log.LogTest > testAppendAndReadWithNonSequentialOffsets PASSED

kafka.log.LogTest > testParseTopicPartitionNameForMissingSeparator PASSED

kafka.log.LogTest > testCorruptIndexRebuild PASSED

kafka.log.LogTest > testBogusIndexSegmentsAreRemoved PASSED

kafka.log.LogTest > testCompressedMessages PASSED

kafka.log.LogTest > testAppendMessageWithNullPayload PASSED

kafka.log.LogTest > testCorruptLog PASSED

kafka.log.LogTest > testLogRecoversToCorrectOffset PASSED

kafka.log.LogTest > testReopenThenTruncate PASSED

kafka.log.LogTest > testParseTopicPartitionNameForMissingPartition PASSED

kafka.log.LogTest > testParseTopicPartitionNameForEmptyName PASSED

kafka.log.LogTest > testOpenDeletesObsoleteFiles PASSED

kafka.log.LogTest > testSizeBasedLogRoll PASSED

kafka.log.LogTest > testTimeBasedLogRollJitter PASSED

kafka.log.LogTest > testParseTopicPartitionName PASSED

kafka.log.LogTest > testTruncateTo PASSED

kafka.log.LogTest > testCleanShutdownFile PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[0] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[1] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[2] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[3] PASSED

kafka.log.CleanerTest > testBuildOffsetMap PASSED

kafka.log.CleanerTest > testSegmentGrouping PASSED

kafka.log.CleanerTest > testCleanSegmentsWithAbort PASSED

kafka.log.CleanerTest > testSegmentGroupingWithSparseOffsets PASSED

kafka.log.CleanerTest > testRecoveryAfterCrash PASSED

kafka.log.CleanerTest > testCleaningWithDeletes PASSED

kafka.log.CleanerTest > testCleanSegments PASSED

kafka.log.CleanerTest > testCleaningWithUnkeyedMessages PASSED

kafka.controller.ControllerFailoverTest > testMetadataUpdate PASSED

kafka.javaapi.consumer.ZookeeperConsumerConnectorTest > testBasic PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytesWithCompression 
PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > 
testIteratorIsConsistentWithCompression PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testIteratorIsConsistent PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testEqualsWithCompression 
PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testWrittenEqualsRead PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testEquals PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytes PASSED

kafka.network.SocketServerTest > testMaxConnectionsPerIp PASSED

kafka.network.SocketServerTest > simpleRequest PASSED

kafka.network.SocketServerTest > testSessionPrincipal PASSED

kafka.network.SocketServerTest > testSocketsCloseOnShutdown PASSED

kafka.network.SocketServerTest > testMaxConnectionsPerIPOverrides PASSED

kafka.network.SocketServerTest > testSslSocketServer PASSED

kafka.network.SocketServerTest > tooBigRequestIsRejected PASSED

kafka.integration.SaslSslTopicMetadataTest > 
testIsrAfterBrokerShutDownAndJoinsBack PASSED

kafka.integration.SaslSslTopicMetadataTest > testAutoCreateTopicWithCollision 
PASSED

kafka.integration.SaslSslTopicMetadataTest > testAliveBrokerListWithNoTopics 
PASSED

kafka.integration.SaslSslTopicMetadataTest > testAutoCreateTopic PASSED

kafka.integration.SaslSslTopicMetadataTest > testGetAllTopicMetadata PASSED

kafka.integration.SaslSslTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup PASSED

kafka.integration.SaslSslTopicMetadataTest > testBasicTopicMetadata PASSED

kafka.integration.SaslSslTopicMetadataTest > testTopicMetadataRequest PASSED

kafka.integration.SaslSslTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown PASSED

kafka.integration.PrimitiveApiTest > testMultiProduce PASSED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch PASSED

kafka.integration.PrimitiveApiTest > testFetchRequestCanProperlySerialize PASSED

kafka.integration.PrimitiveApiTest > testPipelinedProduceRequests PASSED

kafka.integration.PrimitiveApiTest > testProduceAndMultiFetc

[jira] [Commented] (KAFKA-2696) New KafkaProducer documentation doesn't include all necessary config properties

2015-10-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14976600#comment-14976600
 ] 

ASF GitHub Bot commented on KAFKA-2696:
---

GitHub user edwardmlyte opened a pull request:

https://github.com/apache/kafka/pull/367

KAFKA-2696: New KafkaProducer documentation doesn't include all necessary 
config properties

Added documentation for the missing properties, as well as highlighting 
the minimum required properties.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/edwardmlyte/kafka docsUpdate

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/367.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #367


commit 68938ab2470c699a2471c5eacc7451cc122b330a
Author: edwardmlyte 
Date:   2015-10-27T15:38:50Z

KAFKA-2696: New KafkaProducer documentation doesn't include all necessary 
config properties

Added in documentation.




> New KafkaProducer documentation doesn't include all necessary config 
> properties
> ---
>
> Key: KAFKA-2696
> URL: https://issues.apache.org/jira/browse/KAFKA-2696
> Project: Kafka
>  Issue Type: Bug
>  Components: website
>Affects Versions: 0.8.2.2
>Reporter: Edward Maxwell-Lyte
>
> It's missing the definitions for key.serializer and value.serializer. And it 
> would be good to highlight the necessary properties for the new KafkaProducer 
> to work.





[GitHub] kafka pull request: KAFKA-2696: New KafkaProducer documentation do...

2015-10-27 Thread edwardmlyte
GitHub user edwardmlyte opened a pull request:

https://github.com/apache/kafka/pull/367

KAFKA-2696: New KafkaProducer documentation doesn't include all necessary 
config properties

Added documentation for the missing properties, as well as highlighting 
the minimum required properties.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/edwardmlyte/kafka docsUpdate

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/367.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #367


commit 68938ab2470c699a2471c5eacc7451cc122b330a
Author: edwardmlyte 
Date:   2015-10-27T15:38:50Z

KAFKA-2696: New KafkaProducer documentation doesn't include all necessary 
config properties

Added in documentation.






[jira] [Updated] (KAFKA-2447) Add capability to KafkaLog4jAppender to be able to use SSL

2015-10-27 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2447:

   Resolution: Fixed
Fix Version/s: 0.9.0.0
   Status: Resolved  (was: Patch Available)

Issue resolved by pull request 175
[https://github.com/apache/kafka/pull/175]

> Add capability to KafkaLog4jAppender to be able to use SSL
> --
>
> Key: KAFKA-2447
> URL: https://issues.apache.org/jira/browse/KAFKA-2447
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
> Fix For: 0.9.0.0
>
>
> With Kafka supporting SSL, it makes sense to augment KafkaLog4jAppender to be 
> able to use SSL.
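To make the intent concrete, an SSL-enabled configuration for the appender might look like the sketch below. This is hypothetical: the exact property names depend on the setters the patch adds to KafkaLog4jAppender, so treat every key after `topic` as illustrative, and the paths and password as placeholders.

```properties
# Hypothetical log4j.properties sketch -- property names are illustrative.
log4j.rootLogger=INFO, KAFKA
log4j.appender.KAFKA=org.apache.kafka.log4jappender.KafkaLog4jAppender
log4j.appender.KAFKA.brokerList=broker1:9093
log4j.appender.KAFKA.topic=app-logs
log4j.appender.KAFKA.securityProtocol=SSL
log4j.appender.KAFKA.sslTruststoreLocation=/path/to/truststore.jks
log4j.appender.KAFKA.sslTruststorePassword=changeit
log4j.appender.KAFKA.layout=org.apache.log4j.PatternLayout
```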





[jira] [Commented] (KAFKA-2447) Add capability to KafkaLog4jAppender to be able to use SSL

2015-10-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14976588#comment-14976588
 ] 

ASF GitHub Bot commented on KAFKA-2447:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/175


> Add capability to KafkaLog4jAppender to be able to use SSL
> --
>
> Key: KAFKA-2447
> URL: https://issues.apache.org/jira/browse/KAFKA-2447
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
>
> With Kafka supporting SSL, it makes sense to augment KafkaLog4jAppender to be 
> able to use SSL.





Build failed in Jenkins: kafka-trunk-jdk7 #723

2015-10-27 Thread Apache Jenkins Server
See 

Changes:

[cshapi] KAFKA-2645: Document potentially breaking changes in the release 
note…

[cshapi] KAFKA-2516: Rename o.a.k.client.tools to o.a.k.tools

[cshapi] KAFKA-2452: Add new consumer option to mirror maker.

--
[...truncated 324 lines...]
:kafka-trunk-jdk7:clients:createVersionFile
:kafka-trunk-jdk7:clients:jar UP-TO-DATE
:kafka-trunk-jdk7:clients:javadoc UP-TO-DATE
:kafka-trunk-jdk7:log4j-appender:compileJava UP-TO-DATE
:kafka-trunk-jdk7:log4j-appender:processResources UP-TO-DATE
:kafka-trunk-jdk7:log4j-appender:classes UP-TO-DATE
:kafka-trunk-jdk7:log4j-appender:jar UP-TO-DATE
:kafka-trunk-jdk7:core:compileJava UP-TO-DATE
:kafka-trunk-jdk7:core:compileScala UP-TO-DATE
:kafka-trunk-jdk7:core:processResources UP-TO-DATE
:kafka-trunk-jdk7:core:classes UP-TO-DATE
:kafka-trunk-jdk7:log4j-appender:javadoc
:kafka-trunk-jdk7:core:javadoc
:kafka-trunk-jdk7:core:javadocJar
:kafka-trunk-jdk7:core:scaladoc
[ant:scaladoc] Element 
' 
does not exist.
[ant:scaladoc] 
:293:
 warning: a pure expression does nothing in statement position; you may be 
omitting necessary parentheses
[ant:scaladoc] ControllerStats.uncleanLeaderElectionRate
[ant:scaladoc] ^
[ant:scaladoc] 
:294:
 warning: a pure expression does nothing in statement position; you may be 
omitting necessary parentheses
[ant:scaladoc] ControllerStats.leaderElectionTimer
[ant:scaladoc] ^
[ant:scaladoc] warning: there were 15 feature warning(s); re-run with -feature 
for details
[ant:scaladoc] 
:72:
 warning: Could not find any member to link for 
"java.util.concurrent.BlockingQueue#offer".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 
:32:
 warning: Could not find any member to link for 
"java.util.concurrent.BlockingQueue#offer".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 
:137:
 warning: Could not find any member to link for 
"java.util.concurrent.BlockingQueue#poll".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 
:120:
 warning: Could not find any member to link for 
"java.util.concurrent.BlockingQueue#poll".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 
:97:
 warning: Could not find any member to link for 
"java.util.concurrent.BlockingQueue#put".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 
:152:
 warning: Could not find any member to link for 
"java.util.concurrent.BlockingQueue#take".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 9 warnings found
:kafka-trunk-jdk7:core:scaladocJar
:kafka-trunk-jdk7:core:docsJar
:docsJar_2_11_7
Building project 'core' with Scala version 2.11.7
:kafka-trunk-jdk7:clients:compileJava UP-TO-DATE
:kafka-trunk-jdk7:clients:processResources UP-TO-DATE
:kafka-trunk-jdk7:clients:classes UP-TO-DATE
:kafka-trunk-jdk7:clients:determineCommitId UP-TO-DATE
:kafka-trunk-jdk7:clients:createVersionFile
:kafka-trunk-jdk7:clients:jar UP-TO-DATE
:kafka-trunk-jdk7:clients:javadoc UP-TO-DATE
:kafka-trunk-jdk7:log4j-appender:compileJava UP-TO-DATE
:kafka-trunk-jdk7:log4j-appender:processResources UP-TO-DATE
:kafka-trunk-jdk7:log4j-appender:classes UP-TO-DATE
:kafka-trunk-jdk7:log4j-appender:jar UP-TO-DATE
:kafka-trunk-jdk7:core:compileJava UP-TO-DATE
:kafka-trunk-jdk7:core:compileScala
:78:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.

org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP
 ^
:36:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
 commitTimestamp: Long = 
org.apache.kafka.common.request

[jira] [Created] (KAFKA-2696) New KafkaProducer documentation doesn't include all necessary config properties

2015-10-27 Thread Edward Maxwell-Lyte (JIRA)
Edward Maxwell-Lyte created KAFKA-2696:
--

 Summary: New KafkaProducer documentation doesn't include all 
necessary config properties
 Key: KAFKA-2696
 URL: https://issues.apache.org/jira/browse/KAFKA-2696
 Project: Kafka
  Issue Type: Bug
  Components: website
Affects Versions: 0.8.2.2
Reporter: Edward Maxwell-Lyte


It's missing the definitions for key.serializer and value.serializer. And it 
would be good to highlight the necessary properties for the new KafkaProducer 
to work.





[GitHub] kafka pull request: KAFKA-2447: Add capability to KafkaLog4jAppend...

2015-10-27 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/175




[jira] [Updated] (KAFKA-2452) enable new consumer in mirror maker

2015-10-27 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2452:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Issue resolved by pull request 266
[https://github.com/apache/kafka/pull/266]

> enable new consumer in mirror maker
> ---
>
> Key: KAFKA-2452
> URL: https://issues.apache.org/jira/browse/KAFKA-2452
> Project: Kafka
>  Issue Type: Sub-task
>  Components: core
>Affects Versions: 0.9.0.0
>Reporter: Jun Rao
>Assignee: Jiangjie Qin
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> We need to add an option to enable the new consumer in mirror maker.





[GitHub] kafka pull request: KAFKA-2452: Add new consumer option to mirror ...

2015-10-27 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/266




[jira] [Commented] (KAFKA-2452) enable new consumer in mirror maker

2015-10-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14976520#comment-14976520
 ] 

ASF GitHub Bot commented on KAFKA-2452:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/266


> enable new consumer in mirror maker
> ---
>
> Key: KAFKA-2452
> URL: https://issues.apache.org/jira/browse/KAFKA-2452
> Project: Kafka
>  Issue Type: Sub-task
>  Components: core
>Affects Versions: 0.9.0.0
>Reporter: Jun Rao
>Assignee: Jiangjie Qin
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> We need to add an option to enable the new consumer in mirror maker.





[jira] [Commented] (KAFKA-2588) ReplicaManager partitionCount metric should actually be replicaCount

2015-10-27 Thread Grant Henke (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14976519#comment-14976519
 ] 

Grant Henke commented on KAFKA-2588:


Thanks for the feedback [~gwenshap]. I used the aggregation example as a way to 
clarify my point briefly, but perhaps it muddied the water. The only 
reason I bring it up is that I have seen a few people confused by what this 
metric represents. Let me try to explain my understanding of the Kafka model, 
and if things still don't align we can leave this alone. 

Note: This discusses the model at a user/logical level, ignoring Java class names 
and implementation details.

In Kafka there are *topics*, which are _containers_ for feeds of messages. Those 
topics have *partitions*. The partition is mainly a logical concept owned by a 
topic; there is no physical/real partition that can be owned by a broker. 
However, each partition has _n_ *replicas*, and these replicas are 
physically hosted by brokers and may be in-sync or out-of-sync (arguably not a 
replica anymore). Today, the logic that maps a physical replica to a logical 
partition is whether the replica is the leader. Therefore, the leader replica is 
the closest thing to a partition that we can count on a per-broker basis. 
Otherwise, describing how many partitions a broker has does not make much sense, 
because topics have partitions, not brokers.

This is why my change does not eliminate any useful metrics, but essentially 
renames them to make their values clearer. Below are the metric names, what I 
expect to see, and what I currently see:

- replicaCount
   -- currently: does not exist
   -- with my change: added to contain what partitionCount used to, i.e. the 
count of replicas on that broker
- partitionCount
  -- currently: the count of replicas on that broker
  -- with my change: the count of replicas being led (served) on that broker 
(synonymous with leaderCount)
- leaderCount
  -- this did not change, but partitionCount now holds this value as well


Based on your description above it looks like you expect partitionCount to 
represent the same thing I do:
{quote}
PartitionCount shows number of *partitions served* by the broker.
{quote}

Though based on my summary above, I would correct this part about 
leaders/followers to be:
{quote} 
...and since each -partition- *replica* can potentially become leader or 
follower...
{quote}

I hope this clarifies my reasoning for relabeling the metrics and why I 
think some users were confused. I apologize for being long-winded; I wanted to 
be sure I was clear on my current understanding. Please don't hesitate to 
correct my mistakes. We can drop this if it still does not align with others' 
perspectives.
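To make the distinction concrete, the two counts on the example from the issue description (one partition with replication factor 3) can be sketched as follows. This is a self-contained illustration, not Kafka's implementation; Assignment, replicaCount, and leaderCount are made-up names.

```java
import java.util.List;

public class BrokerMetricsDemo {
    // A partition's replica assignment; the leader is one of the replica brokers.
    static class Assignment {
        final int leader;
        final List<Integer> replicas;
        Assignment(int leader, List<Integer> replicas) {
            this.leader = leader;
            this.replicas = replicas;
        }
    }

    // "replicaCount" for a broker: partitions hosting any replica on it.
    static long replicaCount(List<Assignment> cluster, int brokerId) {
        return cluster.stream().filter(a -> a.replicas.contains(brokerId)).count();
    }

    // "leaderCount" (what partitionCount should mean): partitions the broker leads.
    static long leaderCount(List<Assignment> cluster, int brokerId) {
        return cluster.stream().filter(a -> a.leader == brokerId).count();
    }

    public static void main(String[] args) {
        // 1 topic, 1 partition, replication factor 3: replicas on brokers 1..3, leader on 1.
        List<Assignment> cluster = List.of(new Assignment(1, List.of(1, 2, 3)));
        long replicas = 0, leaders = 0;
        for (int b = 1; b <= 3; b++) {
            replicas += replicaCount(cluster, b);
            leaders += leaderCount(cluster, b);
        }
        // Aggregated across brokers there are 3 replicas but only 1 partition served.
        System.out.println(replicas + " " + leaders);
    }
}
```

Aggregating the first count over all brokers gives 3 (one per replica), while the second gives 1 (one per partition), which is exactly the confusion the rename is meant to remove.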

> ReplicaManager partitionCount metric should actually be replicaCount
> 
>
> Key: KAFKA-2588
> URL: https://issues.apache.org/jira/browse/KAFKA-2588
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.2.1
>Reporter: Grant Henke
>Assignee: Grant Henke
>
> The metrics "partitionCount" in the ReplicaManager actually represents the 
> count of replicas. 
> As an example, if I have a cluster with 1 topic with 1 partition and a 
> replication factor of 3. The metric (aggregated) would show a value of 3. 
> There is a metric called "LeaderCount" that actually represents the 
> "partitionCount". In my example above the metric (aggregated) would show a 
> value of 1. 
> We do need to consider compatibility with consuming systems. I think the 
> simplest change would be to:
> - Adjust the "partitionCount" metric to be the same value as "LeaderCount"
> - Add a "replicaCount" metric which contains the values "partitionCount" does 
> today
> - Leave "LeaderCount" in for compatibility
> Documentation will need to be updated as well. 





[jira] [Updated] (KAFKA-2516) Rename o.a.k.client.tools to o.a.k.tools

2015-10-27 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2516:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Issue resolved by pull request 310
[https://github.com/apache/kafka/pull/310]

> Rename o.a.k.client.tools to o.a.k.tools
> 
>
> Key: KAFKA-2516
> URL: https://issues.apache.org/jira/browse/KAFKA-2516
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Assignee: Grant Henke
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> Currently our new performance tools are in o.a.k.client.tools but packaged in 
> kafka-tools not kafka-clients. This is a bit confusing.
> Since they deserve their own jar (you don't want our client tools packaged in 
> your app), let's give them a separate package and call it o.a.k.tools.





[jira] [Commented] (KAFKA-2516) Rename o.a.k.client.tools to o.a.k.tools

2015-10-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14976500#comment-14976500
 ] 

ASF GitHub Bot commented on KAFKA-2516:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/310


> Rename o.a.k.client.tools to o.a.k.tools
> 
>
> Key: KAFKA-2516
> URL: https://issues.apache.org/jira/browse/KAFKA-2516
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Assignee: Grant Henke
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> Currently our new performance tools are in o.a.k.client.tools but packaged in 
> kafka-tools not kafka-clients. This is a bit confusing.
> Since they deserve their own jar (you don't want our client tools packaged in 
> your app), let's give them a separate package and call it o.a.k.tools.





[GitHub] kafka pull request: KAFKA-2516: Rename o.a.k.client.tools to o.a.k...

2015-10-27 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/310




[jira] [Resolved] (KAFKA-2645) Document potentially breaking changes in the release notes for 0.9.0

2015-10-27 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira resolved KAFKA-2645.
-
Resolution: Fixed

Issue resolved by pull request 337
[https://github.com/apache/kafka/pull/337]

> Document potentially breaking changes in the release notes for 0.9.0
> 
>
> Key: KAFKA-2645
> URL: https://issues.apache.org/jira/browse/KAFKA-2645
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Grant Henke
>Assignee: Grant Henke
>Priority: Blocker
> Fix For: 0.9.0.0
>
>






[jira] [Commented] (KAFKA-2645) Document potentially breaking changes in the release notes for 0.9.0

2015-10-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14976496#comment-14976496
 ] 

ASF GitHub Bot commented on KAFKA-2645:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/337


> Document potentially breaking changes in the release notes for 0.9.0
> 
>
> Key: KAFKA-2645
> URL: https://issues.apache.org/jira/browse/KAFKA-2645
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Grant Henke
>Assignee: Grant Henke
>Priority: Blocker
> Fix For: 0.9.0.0
>
>






[GitHub] kafka pull request: KAFKA-2645: Document potentially breaking chan...

2015-10-27 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/337




[jira] [Commented] (KAFKA-2516) Rename o.a.k.client.tools to o.a.k.tools

2015-10-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14976478#comment-14976478
 ] 

ASF GitHub Bot commented on KAFKA-2516:
---

GitHub user granthenke reopened a pull request:

https://github.com/apache/kafka/pull/310

KAFKA-2516: Rename o.a.k.client.tools to o.a.k.tools



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/granthenke/kafka tools-packaging

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/310.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #310


commit f1cf0a01fc4ea46a03bc0cbb37cdf763a91825e5
Author: Grant Henke 
Date:   2015-10-14T16:51:08Z

KAFKA-2516: Rename o.a.k.client.tools to o.a.k.tools




> Rename o.a.k.client.tools to o.a.k.tools
> 
>
> Key: KAFKA-2516
> URL: https://issues.apache.org/jira/browse/KAFKA-2516
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Assignee: Grant Henke
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> Currently our new performance tools are in o.a.k.client.tools but packaged in 
> kafka-tools not kafka-clients. This is a bit confusing.
> Since they deserve their own jar (you don't want our client tools packaged in 
> your app), let's give them a separate package and call it o.a.k.tools.





[jira] [Commented] (KAFKA-2516) Rename o.a.k.client.tools to o.a.k.tools

2015-10-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14976477#comment-14976477
 ] 

ASF GitHub Bot commented on KAFKA-2516:
---

Github user granthenke closed the pull request at:

https://github.com/apache/kafka/pull/310


> Rename o.a.k.client.tools to o.a.k.tools
> 
>
> Key: KAFKA-2516
> URL: https://issues.apache.org/jira/browse/KAFKA-2516
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Assignee: Grant Henke
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> Currently our new performance tools are in o.a.k.client.tools but packaged in 
> kafka-tools not kafka-clients. This is a bit confusing.
> Since they deserve their own jar (you don't want our client tools packaged in 
> your app), let's give them a separate package and call it o.a.k.tools.





[GitHub] kafka pull request: KAFKA-2516: Rename o.a.k.client.tools to o.a.k...

2015-10-27 Thread granthenke
GitHub user granthenke reopened a pull request:

https://github.com/apache/kafka/pull/310

KAFKA-2516: Rename o.a.k.client.tools to o.a.k.tools



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/granthenke/kafka tools-packaging

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/310.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #310


commit f1cf0a01fc4ea46a03bc0cbb37cdf763a91825e5
Author: Grant Henke 
Date:   2015-10-14T16:51:08Z

KAFKA-2516: Rename o.a.k.client.tools to o.a.k.tools






[GitHub] kafka pull request: KAFKA-2516: Rename o.a.k.client.tools to o.a.k...

2015-10-27 Thread granthenke
Github user granthenke closed the pull request at:

https://github.com/apache/kafka/pull/310




[jira] [Commented] (KAFKA-2675) SASL/Kerberos follow-up

2015-10-27 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14976393#comment-14976393
 ] 

Ismael Juma commented on KAFKA-2675:


I asked [~fpj] to validate my reasoning above and he agreed. So, I'm going to 
proceed and remove the unused SASL_KAFKA_SERVER_REALM config in the PR for this 
JIRA. We can add it back when we have a concrete proposal on how to use it.

> SASL/Kerberos follow-up
> -----------------------
>
> Key: KAFKA-2675
> URL: https://issues.apache.org/jira/browse/KAFKA-2675
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Ismael Juma
>Assignee: Ismael Juma
> Fix For: 0.9.0.0
>
>
> This is a follow-up to KAFKA-1686. 
> 1. Decide on `serviceName` configuration: do we want to keep it in two places?
> 2. auth.to.local config name is a bit opaque, is there a better one?
> 3. Implement or remove SASL_KAFKA_SERVER_REALM config
> 4. Consider making Login's thread a daemon thread
> 5. Write test that shows authentication failure due to invalid user
> 6. Write test that shows authentication failure due to wrong password
> 7. Write test that shows authentication failure due to ticket expiring
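Item 4 of the checklist above can be sketched as follows. This is a hypothetical illustration of the daemon-thread idea, not Kafka's actual `Login` implementation; the `TicketRefresher` name is invented for the example.

```java
// Sketch of a background ticket-refresh thread marked as a daemon,
// so it cannot keep the JVM alive after the client has shut down.
public class TicketRefresher {
    public static Thread start(Runnable refreshLoop) {
        Thread t = new Thread(refreshLoop, "kerberos-ticket-refresh");
        t.setDaemon(true); // JVM may exit even while this thread is running
        t.start();
        return t;
    }

    public static void main(String[] args) {
        Thread t = start(() -> {
            try {
                // Stand-in for the periodic re-login/refresh loop.
                Thread.sleep(Long.MAX_VALUE);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        System.out.println("daemon = " + t.isDaemon()); // prints "daemon = true"
        // main returns here; a daemon thread does not block JVM exit.
    }
}
```

Without `setDaemon(true)`, a long-lived refresh loop like this would prevent the process from terminating until the thread is explicitly stopped.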





[jira] [Assigned] (KAFKA-2690) Protect passwords from logging

2015-10-27 Thread Jakub Nowak (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakub Nowak reassigned KAFKA-2690:
----------------------------------

Assignee: Jakub Nowak

> Protect passwords from logging
> ------------------------------
>
> Key: KAFKA-2690
> URL: https://issues.apache.org/jira/browse/KAFKA-2690
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Ismael Juma
>Assignee: Jakub Nowak
> Fix For: 0.9.0.0
>
>
> We currently store the key (ssl.key.password), keystore 
> (ssl.keystore.password) and truststore (ssl.truststore.password) passwords as 
> a String in `KafkaConfig`, `ConsumerConfig` and `ProducerConfig`.
> The problem with this approach is that we may accidentally log the password 
> when logging the config.
> A possible solution is to introduce a new `ConfigDef.Type` that overrides 
> `toString` so that the value is hidden.
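The proposed solution can be sketched as a small wrapper type. This is a minimal illustration of the idea, not Kafka's actual implementation; the `MaskedPassword` name is invented for the example.

```java
// Hypothetical sketch of a password-holding type whose toString()
// hides the real value, so accidentally logging a config (or
// concatenating it into a log message) cannot leak the secret.
public final class MaskedPassword {
    private final String value;

    public MaskedPassword(String value) {
        this.value = value;
    }

    // Explicit accessor for code that genuinely needs the secret.
    public String value() {
        return value;
    }

    // Any logging or string conversion sees only the placeholder.
    @Override
    public String toString() {
        return "[hidden]";
    }

    public static void main(String[] args) {
        MaskedPassword p = new MaskedPassword("s3cret");
        System.out.println("ssl.key.password = " + p); // prints "ssl.key.password = [hidden]"
    }
}
```

A config type along these lines would let `KafkaConfig`, `ConsumerConfig` and `ProducerConfig` keep the real value retrievable while making the default string form safe to log.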





Jenkins build is back to normal : kafka_system_tests #122

2015-10-27 Thread ewen
See 



[jira] [Commented] (KAFKA-2644) Run relevant ducktape tests with SASL_PLAINTEXT and SASL_SSL

2015-10-27 Thread Rajini Sivaram (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14975908#comment-14975908
 ] 

Rajini Sivaram commented on KAFKA-2644:
---------------------------------------

[~geoffra] Thank you very much!

> Run relevant ducktape tests with SASL_PLAINTEXT and SASL_SSL
> ------------------------------------------------------------
>
> Key: KAFKA-2644
> URL: https://issues.apache.org/jira/browse/KAFKA-2644
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Ismael Juma
>Assignee: Rajini Sivaram
>Priority: Critical
> Fix For: 0.9.0.0
>
>
> We need to define which of the existing ducktape tests are relevant. cc 
> [~rsivaram]


