[jira] [Commented] (KAFKA-1856) Add PreCommit Patch Testing

2015-01-18 Thread Ashish Kumar Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14282140#comment-14282140
 ] 

Ashish Kumar Singh commented on KAFKA-1856:
---

[~gwenshap] in that case the current logic in the patch should just work fine. Will 
test it out on a few more JIRAs. A fake JIRA is not required, as I have an option of 
not posting the result.

 Add PreCommit Patch Testing
 ---

 Key: KAFKA-1856
 URL: https://issues.apache.org/jira/browse/KAFKA-1856
 Project: Kafka
  Issue Type: Task
Reporter: Ashish Kumar Singh
Assignee: Ashish Kumar Singh
 Attachments: KAFKA-1856.patch


 h1. Kafka PreCommit Patch Testing - *Don't wait for it to break*
 h2. Motivation
 *With great power comes great responsibility* - Uncle Ben. As the Kafka user 
 list grows, a mechanism to ensure the quality of the product is required. 
 Quality becomes hard to measure and maintain in an open source project 
 because of its wide community of contributors. Luckily, Kafka is not the 
 first open source project and can benefit from the learnings of prior 
 projects.
 PreCommit tests are the tests that are run for each patch that gets attached 
 to an open JIRA. Based on the test results, the test execution framework (the 
 test bot) gives the patch a +1 or -1. Having PreCommit tests takes the load 
 off committers, who would otherwise have to look at and test each patch.
 h2. Tests in Kafka
 h3. Unit and Integration Tests
 [Unit and Integration 
 tests|https://cwiki.apache.org/confluence/display/KAFKA/Kafka+0.9+Unit+and+Integration+Tests]
  are cardinal in helping contributors avoid breaking existing functionality 
 while adding new functionality or fixing existing functionality. These tests, 
 at least the ones relevant to the changes, must be run by contributors before 
 attaching a patch to a JIRA.
 h3. System Tests
 [System 
 tests|https://cwiki.apache.org/confluence/display/KAFKA/Kafka+System+Tests] 
 are much wider tests that, unlike unit tests, focus on end-to-end scenarios 
 and not some specific method or class.
 h2. Apache PreCommit tests
 Apache provides a mechanism to automatically build a project and run a series 
 of tests whenever a patch is uploaded to a JIRA. Based on test execution, the 
 test framework will comment with a +1 or -1 on the JIRA.
 You can read more about the framework here:
 http://wiki.apache.org/general/PreCommitBuilds
 h2. Plan
 # Create a test-patch.py script (similar to the one used in Flume, Sqoop and 
 other projects) that will take a jira as a parameter, apply on the 
 appropriate branch, build the project, run tests and report results. This 
 script should be committed into the Kafka code-base. To begin with, this will 
 only run unit tests. We can add code sanity checks, system_tests, etc in the 
 future.
 # Create a jenkins job for running the test (as described in 
 http://wiki.apache.org/general/PreCommitBuilds) and validate that it works 
 manually. This must be done by a committer with Jenkins access.
 # Ask someone with access to https://builds.apache.org/job/PreCommit-Admin/ 
 to add Kafka to the list of projects PreCommit-Admin triggers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1876) pom file for scala 2.11 should reference a specific version

2015-01-18 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14282083#comment-14282083
 ] 

Jun Rao commented on KAFKA-1876:


Created reviewboard https://reviews.apache.org/r/30019/diff/
 against branch origin/0.8.2

 pom file for scala 2.11 should reference a specific version
 ---

 Key: KAFKA-1876
 URL: https://issues.apache.org/jira/browse/KAFKA-1876
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.2
Reporter: Jun Rao
Assignee: Jun Rao
Priority: Blocker
 Fix For: 0.8.2

 Attachments: kafka-1876.patch


 Currently, the pom file specifies the following scala dependency for 2.11.
 <dependency>
   <groupId>org.scala-lang</groupId>
   <artifactId>scala-library</artifactId>
   <version>2.11</version>
   <scope>compile</scope>
 </dependency>
 However, there is no 2.11 in maven central (there are only 2.11.1, 2.11.2, 
 etc).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1761) num.partitions documented default is 1 while actual default is 2

2015-01-18 Thread Manikumar Reddy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar Reddy updated KAFKA-1761:
---
Attachment: KAFKA-1761_2015-01-19_11:51:58.patch

 num.partitions documented default is 1 while actual default is 2
 -

 Key: KAFKA-1761
 URL: https://issues.apache.org/jira/browse/KAFKA-1761
 Project: Kafka
  Issue Type: Bug
  Components: log
Affects Versions: 0.8.1.1
Reporter: Stevo Slavic
Assignee: Manikumar Reddy
Priority: Minor
 Attachments: KAFKA-1761.patch, KAFKA-1761_2015-01-19_11:51:58.patch


 Default {{num.partitions}} documented in 
 http://kafka.apache.org/08/configuration.html is 1, while server 
 configuration defaults the same parameter to 2 (see 
 https://github.com/apache/kafka/blob/0.8.1/config/server.properties#L63 )
 Please have this inconsistency fixed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1761) num.partitions documented default is 1 while actual default is 2

2015-01-18 Thread Manikumar Reddy (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14282183#comment-14282183
 ] 

Manikumar Reddy commented on KAFKA-1761:


Updated reviewboard https://reviews.apache.org/r/30022/diff/
 against branch origin/trunk

 num.partitions documented default is 1 while actual default is 2
 -

 Key: KAFKA-1761
 URL: https://issues.apache.org/jira/browse/KAFKA-1761
 Project: Kafka
  Issue Type: Bug
  Components: log
Affects Versions: 0.8.1.1
Reporter: Stevo Slavic
Assignee: Manikumar Reddy
Priority: Minor
 Attachments: KAFKA-1761.patch, KAFKA-1761_2015-01-19_11:51:58.patch


 Default {{num.partitions}} documented in 
 http://kafka.apache.org/08/configuration.html is 1, while server 
 configuration defaults the same parameter to 2 (see 
 https://github.com/apache/kafka/blob/0.8.1/config/server.properties#L63 )
 Please have this inconsistency fixed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 30022: Patch for KAFKA-1761

2015-01-18 Thread Manikumar Reddy O

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/30022/
---

(Updated Jan. 19, 2015, 6:23 a.m.)


Review request for kafka.


Bugs: KAFKA-1761
https://issues.apache.org/jira/browse/KAFKA-1761


Repository: kafka


Description
---

correct the default values of config properties


Diffs (updated)
-

  config/server.properties b0e4496a8ca736b6abe965a430e8ce87b0e8287f 
  core/src/main/scala/kafka/server/KafkaConfig.scala 
88689df718364f5a9bef143d4cb7e807a9251786 

Diff: https://reviews.apache.org/r/30022/diff/


Testing
---


Thanks,

Manikumar Reddy O



[jira] [Commented] (KAFKA-1760) Implement new consumer client

2015-01-18 Thread Jay Kreps (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14282100#comment-14282100
 ] 

Jay Kreps commented on KAFKA-1760:
--

Updated reviewboard https://reviews.apache.org/r/27799/diff/
 against branch trunk

 Implement new consumer client
 -

 Key: KAFKA-1760
 URL: https://issues.apache.org/jira/browse/KAFKA-1760
 Project: Kafka
  Issue Type: Sub-task
  Components: consumer
Reporter: Jay Kreps
Assignee: Jay Kreps
 Fix For: 0.8.3

 Attachments: KAFKA-1760.patch, KAFKA-1760_2015-01-11_16:57:15.patch, 
 KAFKA-1760_2015-01-18_19:10:13.patch


 Implement a consumer client.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 27799: Patch for KAFKA-1760

2015-01-18 Thread Jay Kreps

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/27799/
---

(Updated Jan. 19, 2015, 3:10 a.m.)


Review request for kafka.


Bugs: KAFKA-1760
https://issues.apache.org/jira/browse/KAFKA-1760


Repository: kafka


Description
---

New consumer.


Diffs (updated)
-

  build.gradle c9ac43378c3bf5443f0f47c8ba76067237ecb348 
  clients/src/main/java/org/apache/kafka/clients/ClientRequest.java 
d32c319d8ee4c46dad309ea54b136ea9798e2fd7 
  clients/src/main/java/org/apache/kafka/clients/ClusterConnectionStates.java 
8aece7e81a804b177a6f2c12e2dc6c89c1613262 
  clients/src/main/java/org/apache/kafka/clients/CommonClientConfigs.java 
PRE-CREATION 
  clients/src/main/java/org/apache/kafka/clients/ConnectionState.java 
ab7e3220f9b76b92ef981d695299656f041ad5ed 
  clients/src/main/java/org/apache/kafka/clients/KafkaClient.java 
397695568d3fd8e835d8f923a89b3b00c96d0ead 
  clients/src/main/java/org/apache/kafka/clients/NetworkClient.java 
6746275d0b2596cd6ff7ce464a3a8225ad75ef00 
  clients/src/main/java/org/apache/kafka/clients/RequestCompletionHandler.java 
PRE-CREATION 
  clients/src/main/java/org/apache/kafka/clients/consumer/CommitType.java 
PRE-CREATION 
  clients/src/main/java/org/apache/kafka/clients/consumer/Consumer.java 
c0c636b3e1ba213033db6d23655032c9bbd5e378 
  clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerConfig.java 
57c1807ccba9f264186f83e91f37c34b959c8060 
  
clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerRebalanceCallback.java
 e4cf7d1cfa01c2844b53213a7b539cdcbcbeaa3a 
  clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerRecord.java 
16af70a5de52cca786fdea147a6a639b7dc4a311 
  clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerRecords.java 
bdf4b26942d5a8c8a9503e05908e9a9eff6228a7 
  clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java 
76efc216c9e6c3ab084461d792877092a189ad0f 
  clients/src/main/java/org/apache/kafka/clients/consumer/MockConsumer.java 
fa88ac1a8b19b4294f211c4467fe68c7707ddbae 
  
clients/src/main/java/org/apache/kafka/clients/consumer/NoOffsetForPartitionException.java
 PRE-CREATION 
  clients/src/main/java/org/apache/kafka/clients/consumer/OffsetMetadata.java 
ea423ad15eebd262d20d5ec05d592cc115229177 
  
clients/src/main/java/org/apache/kafka/clients/consumer/internals/Heartbeat.java
 PRE-CREATION 
  
clients/src/main/java/org/apache/kafka/clients/consumer/internals/NoOpConsumerRebalanceCallback.java
 PRE-CREATION 
  
clients/src/main/java/org/apache/kafka/clients/consumer/internals/SubscriptionState.java
 PRE-CREATION 
  clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java 
fc71710dd5997576d3841a1c3b0f7e19a8c9698e 
  clients/src/main/java/org/apache/kafka/clients/producer/MockProducer.java 
904976fadf0610982958628eaee810b60a98d725 
  clients/src/main/java/org/apache/kafka/clients/producer/ProducerConfig.java 
8b3e565edd1ae04d8d34bd9f1a41e9fa8c880a75 
  
clients/src/main/java/org/apache/kafka/clients/producer/internals/Metadata.java 
dcf46581b912cfb1b5c8d4cbc293d2d1444b7740 
  
clients/src/main/java/org/apache/kafka/clients/producer/internals/Partitioner.java
 483899d2e69b33655d0e08949f5f64af2519660a 
  clients/src/main/java/org/apache/kafka/clients/producer/internals/Sender.java 
ccc03d8447ebba40131a70e16969686ac4aab58a 
  clients/src/main/java/org/apache/kafka/common/Cluster.java 
d3299b944062d96852452de455902659ad8af757 
  clients/src/main/java/org/apache/kafka/common/PartitionInfo.java 
b15aa2c3ef2d7c4b24618ff42fd4da324237a813 
  clients/src/main/java/org/apache/kafka/common/config/ConfigDef.java 
98cb79b701918eca3f6ca9823b6c7b7c97b3ecec 
  clients/src/main/java/org/apache/kafka/common/errors/ApiException.java 
7c948b166a8ac07616809f260754116ae7764973 
  clients/src/main/java/org/apache/kafka/common/network/Selectable.java 
b68bbf00ab8eba6c5867d346c91188142593ca6e 
  clients/src/main/java/org/apache/kafka/common/network/Selector.java 
74d695ba39de44b6a3d15340ec0114bc4fce2ba2 
  clients/src/main/java/org/apache/kafka/common/protocol/Errors.java 
3316b6a1098311b8603a4a5893bf57b75d2e43cb 
  clients/src/main/java/org/apache/kafka/common/protocol/types/Struct.java 
121e880a941fcd3e6392859edba11a94236494cc 
  clients/src/main/java/org/apache/kafka/common/record/LogEntry.java 
e4d688cbe0c61b74ea15fc8dd3b634f9e5ee9b83 
  clients/src/main/java/org/apache/kafka/common/record/MemoryRecords.java 
040e5b91005edb8f015afdfa76fd94e0bf3cb4ca 
  clients/src/main/java/org/apache/kafka/common/requests/FetchRequest.java 
2fc471f64f4352eeb128bbd3941779780076fb8c 
  clients/src/main/java/org/apache/kafka/common/requests/ListOffsetRequest.java 
99364c1ca464f7b81be7d3da15b40ab66717a659 
  
clients/src/main/java/org/apache/kafka/common/requests/OffsetCommitRequest.java 
3ee5cbad55ce836fd04bb954dcf6ef2f9bc3288f 
  

[jira] [Commented] (KAFKA-1760) Implement new consumer client

2015-01-18 Thread Jay Kreps (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14282101#comment-14282101
 ] 

Jay Kreps commented on KAFKA-1760:
--

Posted an updated patch rebased against trunk.

 Implement new consumer client
 -

 Key: KAFKA-1760
 URL: https://issues.apache.org/jira/browse/KAFKA-1760
 Project: Kafka
  Issue Type: Sub-task
  Components: consumer
Reporter: Jay Kreps
Assignee: Jay Kreps
 Fix For: 0.8.3

 Attachments: KAFKA-1760.patch, KAFKA-1760_2015-01-11_16:57:15.patch, 
 KAFKA-1760_2015-01-18_19:10:13.patch


 Implement a consumer client.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1760) Implement new consumer client

2015-01-18 Thread Jay Kreps (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jay Kreps updated KAFKA-1760:
-
Attachment: KAFKA-1760_2015-01-18_19:10:13.patch

 Implement new consumer client
 -

 Key: KAFKA-1760
 URL: https://issues.apache.org/jira/browse/KAFKA-1760
 Project: Kafka
  Issue Type: Sub-task
  Components: consumer
Reporter: Jay Kreps
Assignee: Jay Kreps
 Fix For: 0.8.3

 Attachments: KAFKA-1760.patch, KAFKA-1760_2015-01-11_16:57:15.patch, 
 KAFKA-1760_2015-01-18_19:10:13.patch


 Implement a consumer client.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (KAFKA-1875) Add info on PreCommit Patch Testing to wiki

2015-01-18 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14282132#comment-14282132
 ] 

Gwen Shapira edited comment on KAFKA-1875 at 1/19/15 4:38 AM:
--

Documenting things is always a good idea :)

You don't actually need a JIRA to create / edit a wiki, you just need to:
1. Create a wiki user
2. Email the dev mailing list and ask for wiki edit privileges for your user

I think the wiki is the right place to document the PreCommit Testing script. It's 
not part of the Kafka product, so it doesn't belong in the product docs.

Hope this helps.


was (Author: gwenshap):
You don't actually need a JIRA to create / edit a wiki, you just need to:
1. Create a wiki user
2. Email the dev mailing list and ask for wiki edit privileges for your user

I think wiki is the right place to document the PreCommit Testing script. Its 
not part of the product Kafka so it doesn't belong in product docs.

Hope this helps.

 Add info on PreCommit Patch Testing to wiki
 ---

 Key: KAFKA-1875
 URL: https://issues.apache.org/jira/browse/KAFKA-1875
 Project: Kafka
  Issue Type: Sub-task
Reporter: Ashish Kumar Singh
Assignee: Ashish Kumar Singh

  KAFKA-1856 adds PreCommit testing to the Kafka project. A wiki page/ 
  documentation is required to outline the various steps involved in PreCommit 
  Testing and how it works.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1875) Add info on PreCommit Patch Testing to wiki

2015-01-18 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14282132#comment-14282132
 ] 

Gwen Shapira commented on KAFKA-1875:
-

You don't actually need a JIRA to create / edit a wiki, you just need to:
1. Create a wiki user
2. Email the dev mailing list and ask for wiki edit privileges for your user

I think the wiki is the right place to document the PreCommit Testing script. It's 
not part of the Kafka product, so it doesn't belong in the product docs.

Hope this helps.

 Add info on PreCommit Patch Testing to wiki
 ---

 Key: KAFKA-1875
 URL: https://issues.apache.org/jira/browse/KAFKA-1875
 Project: Kafka
  Issue Type: Sub-task
Reporter: Ashish Kumar Singh
Assignee: Ashish Kumar Singh

  KAFKA-1856 adds PreCommit testing to the Kafka project. A wiki page/ 
  documentation is required to outline the various steps involved in PreCommit 
  Testing and how it works.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1875) Add info on PreCommit Patch Testing to wiki

2015-01-18 Thread Ashish Kumar Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14282135#comment-14282135
 ] 

Ashish Kumar Singh commented on KAFKA-1875:
---

[~gwenshap] This is exactly what I intend to do :). This JIRA is a sub-task of 
KAFKA-1856, just to make sure it does not get skipped.

 Add info on PreCommit Patch Testing to wiki
 ---

 Key: KAFKA-1875
 URL: https://issues.apache.org/jira/browse/KAFKA-1875
 Project: Kafka
  Issue Type: Sub-task
Reporter: Ashish Kumar Singh
Assignee: Ashish Kumar Singh

  KAFKA-1856 adds PreCommit testing to the Kafka project. A wiki page/ 
  documentation is required to outline the various steps involved in PreCommit 
  Testing and how it works.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 30019: Patch for kafka-1876

2015-01-18 Thread Joe Stein

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/30019/#review68587
---

Ship it!


Ship It!

- Joe Stein


On Jan. 19, 2015, 1:51 a.m., Jun Rao wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/30019/
 ---
 
 (Updated Jan. 19, 2015, 1:51 a.m.)
 
 
 Review request for kafka.
 
 
 Bugs: kafka-1876
 https://issues.apache.org/jira/browse/kafka-1876
 
 
 Repository: kafka
 
 
 Description
 ---
 
 bind to a specific version of scala 2.11
 
 
 Diffs
 -
 
   build.gradle c9ac43378c3bf5443f0f47c8ba76067237ecb348 
 
 Diff: https://reviews.apache.org/r/30019/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Jun Rao
 




Re: [VOTE] 0.8.2.0 Candidate 1

2015-01-18 Thread Joe Stein
It works OK in Gradle but fails if you're using Maven.

Taking a look at the patch you uploaded now.

/***
 Joe Stein
 Founder, Principal Consultant
 Big Data Open Source Security LLC
 http://www.stealth.ly
 Twitter: @allthingshadoop http://www.twitter.com/allthingshadoop
/

On Sun, Jan 18, 2015 at 8:59 PM, Jun Rao j...@confluent.io wrote:

 There seems to be an issue with the pom file for kafka_2.11-0.8.2 jar. It
 references scala-library 2.11, which doesn't exist in maven central
 (2.11.1, etc do exist). This seems to be an issue in the 0.8.2 beta as
 well. I tried to reference kafka_2.11-0.8.2 beta in a project and the build
 failed because scala-library:jar:2.11 doesn't exist. Filed KAFKA-1876 as an
 0.8.2 blocker. It would be great if people familiar with scala can take a
 look and see if this is a real issue.

 Thanks,

 Jun

 On Tue, Jan 13, 2015 at 7:16 PM, Jun Rao j...@confluent.io wrote:

  This is the first candidate for release of Apache Kafka 0.8.2.0. There
  have been some changes since the 0.8.2 beta release, especially in the new
  java producer api and jmx mbean names. It would be great if people can
 test
  this out thoroughly. We are giving people 10 days for testing and voting.
 
  Release Notes for the 0.8.2.0 release
  https://people.apache.org/~junrao/kafka-0.8.2.0-candidate1/RELEASE_NOTES.html

  *** Please download, test and vote by Friday, Jan 23rd, 7pm PT

  Kafka's KEYS file containing PGP keys we use to sign the release:
  https://people.apache.org/~junrao/kafka-0.8.2.0-candidate1/KEYS in
  addition to the md5, sha1
  and sha2 (SHA256) checksum.

  * Release artifacts to be voted upon (source and binary):
  https://people.apache.org/~junrao/kafka-0.8.2.0-candidate1/

  * Maven artifacts to be voted upon prior to release:
  https://people.apache.org/~junrao/kafka-0.8.2.0-candidate1/maven_staging/

  * scala-doc
  https://people.apache.org/~junrao/kafka-0.8.2.0-candidate1/scaladoc/#package

  * java-doc
  https://people.apache.org/~junrao/kafka-0.8.2.0-candidate1/javadoc/

  * The tag to be voted upon (off the 0.8.2 branch) is the 0.8.2.0 tag
  https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=b0c7d579f8aeb5750573008040a42b7377a651d5
 
  /***
 
  Thanks,
 
  Jun
 



[jira] [Updated] (KAFKA-1761) num.partitions documented default is 1 while actual default is 2

2015-01-18 Thread Manikumar Reddy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar Reddy updated KAFKA-1761:
---
Assignee: Manikumar Reddy  (was: Jay Kreps)
  Status: Patch Available  (was: Open)

 num.partitions documented default is 1 while actual default is 2
 -

 Key: KAFKA-1761
 URL: https://issues.apache.org/jira/browse/KAFKA-1761
 Project: Kafka
  Issue Type: Bug
  Components: log
Affects Versions: 0.8.1.1
Reporter: Stevo Slavic
Assignee: Manikumar Reddy
Priority: Minor
 Attachments: KAFKA-1761.patch


 Default {{num.partitions}} documented in 
 http://kafka.apache.org/08/configuration.html is 1, while server 
 configuration defaults the same parameter to 2 (see 
 https://github.com/apache/kafka/blob/0.8.1/config/server.properties#L63 )
 Please have this inconsistency fixed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1761) num.partitions documented default is 1 while actual default is 2

2015-01-18 Thread Manikumar Reddy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar Reddy updated KAFKA-1761:
---
Attachment: KAFKA-1761.patch

 num.partitions documented default is 1 while actual default is 2
 -

 Key: KAFKA-1761
 URL: https://issues.apache.org/jira/browse/KAFKA-1761
 Project: Kafka
  Issue Type: Bug
  Components: log
Affects Versions: 0.8.1.1
Reporter: Stevo Slavic
Assignee: Jay Kreps
Priority: Minor
 Attachments: KAFKA-1761.patch


 Default {{num.partitions}} documented in 
 http://kafka.apache.org/08/configuration.html is 1, while server 
 configuration defaults the same parameter to 2 (see 
 https://github.com/apache/kafka/blob/0.8.1/config/server.properties#L63 )
 Please have this inconsistency fixed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1761) num.partitions documented default is 1 while actual default is 2

2015-01-18 Thread Manikumar Reddy (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14282178#comment-14282178
 ] 

Manikumar Reddy commented on KAFKA-1761:


Created reviewboard https://reviews.apache.org/r/30022/diff/
 against branch origin/trunk

 num.partitions documented default is 1 while actual default is 2
 -

 Key: KAFKA-1761
 URL: https://issues.apache.org/jira/browse/KAFKA-1761
 Project: Kafka
  Issue Type: Bug
  Components: log
Affects Versions: 0.8.1.1
Reporter: Stevo Slavic
Assignee: Jay Kreps
Priority: Minor
 Attachments: KAFKA-1761.patch


 Default {{num.partitions}} documented in 
 http://kafka.apache.org/08/configuration.html is 1, while server 
 configuration defaults the same parameter to 2 (see 
 https://github.com/apache/kafka/blob/0.8.1/config/server.properties#L63 )
 Please have this inconsistency fixed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1856) Add PreCommit Patch Testing

2015-01-18 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14282131#comment-14282131
 ] 

Gwen Shapira commented on KAFKA-1856:
-

Patches where an A.B.C branch exists should be applied on the A.B.C branch. If 
A.B.C doesn't exist (0.8.3, 0.9), they should be applied on trunk. This should 
explain KAFKA-1694.
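(For illustration only: a minimal sketch of how a patch-testing script might implement this branch-selection rule. The helper names below are hypothetical and not taken from the attached KAFKA-1856 patch.)

    import subprocess

    def remote_branch_exists(branch, remote="origin"):
        # `git ls-remote --heads <remote> <branch>` prints a ref line only if the branch exists.
        out = subprocess.run(["git", "ls-remote", "--heads", remote, branch],
                             capture_output=True, text=True, check=True).stdout
        return bool(out.strip())

    def target_branch(fix_version):
        # Apply on the A.B.C release branch if it exists, otherwise fall back to trunk.
        if fix_version and remote_branch_exists(fix_version):
            return fix_version
        return "trunk"

    print(target_branch("0.8.2"))  # -> "0.8.2" (release branch exists)
    print(target_branch("0.8.3"))  # -> "trunk" (no such release branch yet)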

You can open a JIRA with a fake patch for the purpose of testing builds on 
different releases.


 Add PreCommit Patch Testing
 ---

 Key: KAFKA-1856
 URL: https://issues.apache.org/jira/browse/KAFKA-1856
 Project: Kafka
  Issue Type: Task
Reporter: Ashish Kumar Singh
Assignee: Ashish Kumar Singh
 Attachments: KAFKA-1856.patch


 h1. Kafka PreCommit Patch Testing - *Don't wait for it to break*
 h2. Motivation
 *With great power comes great responsibility* - Uncle Ben. As the Kafka user 
 list grows, a mechanism to ensure the quality of the product is required. 
 Quality becomes hard to measure and maintain in an open source project 
 because of its wide community of contributors. Luckily, Kafka is not the 
 first open source project and can benefit from the learnings of prior 
 projects.
 PreCommit tests are the tests that are run for each patch that gets attached 
 to an open JIRA. Based on the test results, the test execution framework (the 
 test bot) gives the patch a +1 or -1. Having PreCommit tests takes the load 
 off committers, who would otherwise have to look at and test each patch.
 h2. Tests in Kafka
 h3. Unit and Integration Tests
 [Unit and Integration 
 tests|https://cwiki.apache.org/confluence/display/KAFKA/Kafka+0.9+Unit+and+Integration+Tests]
  are cardinal in helping contributors avoid breaking existing functionality 
 while adding new functionality or fixing existing functionality. These tests, 
 at least the ones relevant to the changes, must be run by contributors before 
 attaching a patch to a JIRA.
 h3. System Tests
 [System 
 tests|https://cwiki.apache.org/confluence/display/KAFKA/Kafka+System+Tests] 
 are much wider tests that, unlike unit tests, focus on end-to-end scenarios 
 and not some specific method or class.
 h2. Apache PreCommit tests
 Apache provides a mechanism to automatically build a project and run a series 
 of tests whenever a patch is uploaded to a JIRA. Based on test execution, the 
 test framework will comment with a +1 or -1 on the JIRA.
 You can read more about the framework here:
 http://wiki.apache.org/general/PreCommitBuilds
 h2. Plan
 # Create a test-patch.py script (similar to the one used in Flume, Sqoop and 
 other projects) that will take a jira as a parameter, apply on the 
 appropriate branch, build the project, run tests and report results. This 
 script should be committed into the Kafka code-base. To begin with, this will 
 only run unit tests. We can add code sanity checks, system_tests, etc in the 
 future.
 # Create a jenkins job for running the test (as described in 
 http://wiki.apache.org/general/PreCommitBuilds) and validate that it works 
 manually. This must be done by a committer with Jenkins access.
 # Ask someone with access to https://builds.apache.org/job/PreCommit-Admin/ 
 to add Kafka to the list of projects PreCommit-Admin triggers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1728) update 082 docs

2015-01-18 Thread Manikumar Reddy (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14282166#comment-14282166
 ] 

Manikumar Reddy commented on KAFKA-1728:



we need to update the default values of the following config properties in 
0.8.2 docs.

||Config property||Default value in code|| Default value in docs||
|background.threads | 10 | 4 |
|controller.message.queue.size| Int.MaxValue | 10 |
|fetch.purgatory.purge.interval.requests | 1000 | 1 |
|producer.purgatory.purge.interval.requests| 1000 | 1 |
|offset.metadata.max.bytes | 4096 | 1024 |
|log.cleaner.io.max.bytes.per.second|Double.MaxValue|None|
|log.flush.interval.messages|Long.MaxValue|None|
|log.flush.scheduler.interval.ms|Long.MaxValue|3000|
|log.flush.interval.ms|Long.MaxValue|3000|
|queued.max.message.chunks|2|10|


 update 082 docs
 ---

 Key: KAFKA-1728
 URL: https://issues.apache.org/jira/browse/KAFKA-1728
 Project: Kafka
  Issue Type: Task
Affects Versions: 0.8.2
Reporter: Jun Rao
Priority: Blocker
 Fix For: 0.8.2


 We need to update the docs for 082 release.
 https://svn.apache.org/repos/asf/kafka/site/082
 http://kafka.apache.org/082/documentation.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Review Request 30022: Patch for KAFKA-1761

2015-01-18 Thread Manikumar Reddy O

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/30022/
---

Review request for kafka.


Bugs: KAFKA-1761
https://issues.apache.org/jira/browse/KAFKA-1761


Repository: kafka


Description
---

correct the default values of config properties


Diffs
-

  config/server.properties b0e4496a8ca736b6abe965a430e8ce87b0e8287f 
  core/src/main/scala/kafka/log/LogCleaner.scala 
f8e7cd5fabce78c248a9027c4bb374a792508675 
  core/src/main/scala/kafka/server/KafkaConfig.scala 
88689df718364f5a9bef143d4cb7e807a9251786 
  core/src/main/scala/kafka/tools/TestLogCleaning.scala 
af496f7c547a5ac7a4096a6af325dad0d8feec6f 
  core/src/test/scala/unit/kafka/log/LogCleanerIntegrationTest.scala 
07acd460b1259e0a3f4069b8b8dcd8123ef5810e 

Diff: https://reviews.apache.org/r/30022/diff/


Testing
---


Thanks,

Manikumar Reddy O



[jira] [Commented] (KAFKA-1761) num.partitions documented default is 1 while actual default is 2

2015-01-18 Thread Manikumar Reddy (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14282185#comment-14282185
 ] 

Manikumar Reddy commented on KAFKA-1761:


1. we need to update the default values of the following config properties in 
0.8.2 docs.

||Config property||Default value in code|| Default value in docs||
|background.threads | 10 | 4 |
|controller.message.queue.size| Int.MaxValue | 10 |
|fetch.purgatory.purge.interval.requests | 1000 | 1 |
|producer.purgatory.purge.interval.requests| 1000 | 1 |
|offset.metadata.max.bytes | 4096 | 1024 |
|log.cleaner.io.max.bytes.per.second|Double.MaxValue|None|
|log.flush.interval.messages|Long.MaxValue|None|
|log.flush.scheduler.interval.ms|Long.MaxValue|3000|
|log.flush.interval.ms|Long.MaxValue|3000|
|queued.max.message.chunks|2|10|

2. Set the default port to 9092 in code (As suggested by [~joestein])

3. The following needs to be corrected in server.properties

||Config property||Default value in code|| Default value in conf||
|zookeeper.connection.timeout.ms|   6000 |2000|
|socket.receive.buffer.bytes|102400| 65536|

Point 1 can be done as part of KAFKA-1728. Uploaded a simple patch for Points 2 
and 3.
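(An aside, for illustration only and not part of the uploaded patch: a small check along these lines can flag drift between config/server.properties and the defaults in code. The expected values below are just the two entries from point 3.)

    CODE_DEFAULTS = {
        "zookeeper.connection.timeout.ms": "6000",
        "socket.receive.buffer.bytes": "102400",
    }

    def load_properties(path):
        # Minimal parser for simple key=value .properties lines.
        props = {}
        with open(path) as f:
            for line in f:
                line = line.strip()
                if line and not line.startswith("#") and "=" in line:
                    key, _, value = line.partition("=")
                    props[key.strip()] = value.strip()
        return props

    shipped = load_properties("config/server.properties")
    for key, expected in CODE_DEFAULTS.items():
        actual = shipped.get(key)
        if actual is not None and actual != expected:
            print("%s: server.properties has %s, code default is %s"
                  % (key, actual, expected))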


 num.partitions documented default is 1 while actual default is 2
 -

 Key: KAFKA-1761
 URL: https://issues.apache.org/jira/browse/KAFKA-1761
 Project: Kafka
  Issue Type: Bug
  Components: log
Affects Versions: 0.8.1.1
Reporter: Stevo Slavic
Assignee: Manikumar Reddy
Priority: Minor
 Attachments: KAFKA-1761.patch, KAFKA-1761_2015-01-19_11:51:58.patch


 Default {{num.partitions}} documented in 
 http://kafka.apache.org/08/configuration.html is 1, while server 
 configuration defaults the same parameter to 2 (see 
 https://github.com/apache/kafka/blob/0.8.1/config/server.properties#L63 )
 Please have this inconsistency fixed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-1876) pom file for scala 2.11 should reference a specific version

2015-01-18 Thread Jun Rao (JIRA)
Jun Rao created KAFKA-1876:
--

 Summary: pom file for scala 2.11 should reference a specific 
version
 Key: KAFKA-1876
 URL: https://issues.apache.org/jira/browse/KAFKA-1876
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.2
Reporter: Jun Rao
Assignee: Jun Rao
Priority: Blocker
 Fix For: 0.8.2


Currently, the pom file specifies the following scala dependency for 2.11.
<dependency>
  <groupId>org.scala-lang</groupId>
  <artifactId>scala-library</artifactId>
  <version>2.11</version>
  <scope>compile</scope>
</dependency>
However, there is no 2.11 in maven central (there are only 2.11.1, 2.11.2, etc).
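(For illustration, not part of the fix: the standard Maven Central repository layout can be probed to confirm that only fully qualified 2.11.x versions of scala-library exist. The helper below is hypothetical.)

    import urllib.request, urllib.error

    BASE = "https://repo1.maven.org/maven2/org/scala-lang/scala-library"

    def pom_exists(version):
        # HEAD request against the expected POM location for the given version.
        url = "%s/%s/scala-library-%s.pom" % (BASE, version, version)
        try:
            with urllib.request.urlopen(urllib.request.Request(url, method="HEAD")) as resp:
                return resp.status == 200
        except urllib.error.HTTPError:
            return False

    for v in ("2.11", "2.11.1", "2.11.2"):
        print(v, "found" if pom_exists(v) else "missing")
    # Expected: 2.11 missing, 2.11.1 and 2.11.2 found, hence the need to pin
    # the dependency to a concrete 2.11.x version.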




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Review Request 30019: Patch for kafka-1876

2015-01-18 Thread Jun Rao

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/30019/
---

Review request for kafka.


Bugs: kafka-1876
https://issues.apache.org/jira/browse/kafka-1876


Repository: kafka


Description
---

bind to a specific version of scala 2.11


Diffs
-

  build.gradle c9ac43378c3bf5443f0f47c8ba76067237ecb348 

Diff: https://reviews.apache.org/r/30019/diff/


Testing
---


Thanks,

Jun Rao



[jira] [Updated] (KAFKA-1876) pom file for scala 2.11 should reference a specific version

2015-01-18 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-1876:
---
Attachment: kafka-1876.patch

 pom file for scala 2.11 should reference a specific version
 ---

 Key: KAFKA-1876
 URL: https://issues.apache.org/jira/browse/KAFKA-1876
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.2
Reporter: Jun Rao
Assignee: Jun Rao
Priority: Blocker
 Fix For: 0.8.2

 Attachments: kafka-1876.patch


 Currently, the pom file specifies the following scala dependency for 2.11.
 <dependency>
   <groupId>org.scala-lang</groupId>
   <artifactId>scala-library</artifactId>
   <version>2.11</version>
   <scope>compile</scope>
 </dependency>
 However, there is no 2.11 in maven central (there are only 2.11.1, 2.11.2, 
 etc).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1869) Opening some random ports while running kafka service

2015-01-18 Thread QianHu (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

QianHu updated KAFKA-1869:
--
Description: 
While running the Kafka service, four random ports have been opened. Among them, 
 and 9092 are set by myself, but 28538 and 16650 are opened randomly. 
Can you help me understand why these random ports are opened, and how we can give 
them constant values? Thank you very much.
[work@02 kafka]$ jps
8400 Jps
727 Kafka
[work@02 kafka]$ netstat -tpln|grep 727
(Not all processes could be identified, non-owned process info
 will not be shown, you would have to be root to see it all.)
tcp0  0 0.0.0.0:0.0.0.0:*   
LISTEN  727/./bin/../jdk1.7 
tcp0  0 0.0.0.0:28538   0.0.0.0:*   
LISTEN  727/./bin/../jdk1.7 
tcp0  0 0.0.0.0:9092    0.0.0.0:*   
LISTEN  727/./bin/../jdk1.7 
tcp0  0 0.0.0.0:16650   0.0.0.0:*   
LISTEN  727/./bin/../jdk1.7 

  was:
while running kafka service , four  random ports hava been opened . In which , 
 and 9092 are setted by myself , but  28538 and 16650 are opened randomly . 
Can you help me that why this random ports will be opened , and how can we give 
them constant values ? Thank you very much .
[work@02 kafka]$ jps
8400 Jps
727 Kafka
[work@02 kafka]$ netstat -tpln|grep 727
(Not all processes could be identified, non-owned process info
 will not be shown, you would have to be root to see it all.)
tcp0  0 0.0.0.0:0.0.0.0:*   
LISTEN  727/./bin/../jdk1.7 
tcp0  0 0.0.0.0:28538   0.0.0.0:*   
LISTEN  727/./bin/../jdk1.7 
tcp0  0 0.0.0.0:90920.0.0.0:*   
LISTEN  727/./bin/../jdk1.7 
tcp0  0 0.0.0.0:16650   0.0.0.0:*   
LISTEN  727/./bin/../jdk1.7 


 Opening some random ports while running kafka service 
 ---

 Key: KAFKA-1869
 URL: https://issues.apache.org/jira/browse/KAFKA-1869
 Project: Kafka
  Issue Type: Bug
 Environment: kafka_2.9.2-0.8.1.1
Reporter: QianHu
Assignee: Manikumar Reddy
 Fix For: 0.8.2


  While running the Kafka service, four random ports have been opened. Among 
  them,  and 9092 are set by myself, but 28538 and 16650 are opened 
  randomly. Can you help me understand why these random ports are opened, and 
  how we can give them constant values? Thank you very much.
 [work@02 kafka]$ jps
 8400 Jps
 727 Kafka
 [work@02 kafka]$ netstat -tpln|grep 727
 (Not all processes could be identified, non-owned process info
  will not be shown, you would have to be root to see it all.)
 tcp0  0 0.0.0.0:0.0.0.0:*   
 LISTEN  727/./bin/../jdk1.7 
 tcp0  0 0.0.0.0:28538   0.0.0.0:*   
 LISTEN  727/./bin/../jdk1.7 
  tcp0  0 0.0.0.0:9092    0.0.0.0:*   
 LISTEN  727/./bin/../jdk1.7 
 tcp0  0 0.0.0.0:16650   0.0.0.0:*   
 LISTEN  727/./bin/../jdk1.7 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [VOTE] 0.8.2.0 Candidate 1

2015-01-18 Thread Jun Rao
There seems to be an issue with the pom file for kafka_2.11-0.8.2 jar. It
references scala-library 2.11, which doesn't exist in maven central
(2.11.1, etc do exist). This seems to be an issue in the 0.8.2 beta as
well. I tried to reference kafka_2.11-0.8.2 beta in a project and the build
failed because scala-library:jar:2.11 doesn't exist. Filed KAFKA-1876 as an
0.8.2 blocker. It would be great if people familiar with scala can take a
look and see if this is a real issue.

Thanks,

Jun

On Tue, Jan 13, 2015 at 7:16 PM, Jun Rao j...@confluent.io wrote:

 This is the first candidate for release of Apache Kafka 0.8.2.0. There
 have been some changes since the 0.8.2 beta release, especially in the new
 java producer api and jmx mbean names. It would be great if people can test
 this out thoroughly. We are giving people 10 days for testing and voting.

 Release Notes for the 0.8.2.0 release
 https://people.apache.org/~junrao/kafka-0.8.2.0-candidate1/RELEASE_NOTES.html

 *** Please download, test and vote by Friday, Jan 23rd, 7pm PT

 Kafka's KEYS file containing PGP keys we use to sign the release:
 https://people.apache.org/~junrao/kafka-0.8.2.0-candidate1/KEYS in
 addition to the md5, sha1
 and sha2 (SHA256) checksum.

 * Release artifacts to be voted upon (source and binary):
 https://people.apache.org/~junrao/kafka-0.8.2.0-candidate1/

 * Maven artifacts to be voted upon prior to release:
 https://people.apache.org/~junrao/kafka-0.8.2.0-candidate1/maven_staging/

 * scala-doc
 https://people.apache.org/~junrao/kafka-0.8.2.0-candidate1/scaladoc/#package

 * java-doc
 https://people.apache.org/~junrao/kafka-0.8.2.0-candidate1/javadoc/

 * The tag to be voted upon (off the 0.8.2 branch) is the 0.8.2.0 tag
 https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=b0c7d579f8aeb5750573008040a42b7377a651d5

 /***

 Thanks,

 Jun



[jira] [Commented] (KAFKA-1856) Add PreCommit Patch Testing

2015-01-18 Thread Ashish Kumar Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14282164#comment-14282164
 ] 

Ashish Kumar Singh commented on KAFKA-1856:
---

Updated reviewboard https://reviews.apache.org/r/29893/
 against branch trunk

 Add PreCommit Patch Testing
 ---

 Key: KAFKA-1856
 URL: https://issues.apache.org/jira/browse/KAFKA-1856
 Project: Kafka
  Issue Type: Task
Reporter: Ashish Kumar Singh
Assignee: Ashish Kumar Singh
 Attachments: KAFKA-1856.patch, KAFKA-1856_2015-01-18_21:43:56.patch


 h1. Kafka PreCommit Patch Testing - *Don't wait for it to break*
 h2. Motivation
 *With great power comes great responsibility* - Uncle Ben. As the Kafka user 
 list grows, a mechanism to ensure the quality of the product is required. 
 Quality becomes hard to measure and maintain in an open source project 
 because of its wide community of contributors. Luckily, Kafka is not the 
 first open source project and can benefit from the learnings of prior 
 projects.
 PreCommit tests are the tests that are run for each patch that gets attached 
 to an open JIRA. Based on the test results, the test execution framework (the 
 test bot) gives the patch a +1 or -1. Having PreCommit tests takes the load 
 off committers, who would otherwise have to look at and test each patch.
 h2. Tests in Kafka
 h3. Unit and Integration Tests
 [Unit and Integration 
 tests|https://cwiki.apache.org/confluence/display/KAFKA/Kafka+0.9+Unit+and+Integration+Tests]
  are cardinal in helping contributors avoid breaking existing functionality 
 while adding new functionality or fixing existing functionality. These tests, 
 at least the ones relevant to the changes, must be run by contributors before 
 attaching a patch to a JIRA.
 h3. System Tests
 [System 
 tests|https://cwiki.apache.org/confluence/display/KAFKA/Kafka+System+Tests] 
 are much wider tests that, unlike unit tests, focus on end-to-end scenarios 
 and not some specific method or class.
 h2. Apache PreCommit tests
 Apache provides a mechanism to automatically build a project and run a series 
 of tests whenever a patch is uploaded to a JIRA. Based on test execution, the 
 test framework will comment with a +1 or -1 on the JIRA.
 You can read more about the framework here:
 http://wiki.apache.org/general/PreCommitBuilds
 h2. Plan
 # Create a test-patch.py script (similar to the one used in Flume, Sqoop and 
 other projects) that will take a jira as a parameter, apply on the 
 appropriate branch, build the project, run tests and report results. This 
 script should be committed into the Kafka code-base. To begin with, this will 
 only run unit tests. We can add code sanity checks, system_tests, etc in the 
 future.
 # Create a jenkins job for running the test (as described in 
 http://wiki.apache.org/general/PreCommitBuilds) and validate that it works 
 manually. This must be done by a committer with Jenkins access.
 # Ask someone with access to https://builds.apache.org/job/PreCommit-Admin/ 
 to add Kafka to the list of projects PreCommit-Admin triggers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1856) Add PreCommit Patch Testing

2015-01-18 Thread Ashish Kumar Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish Kumar Singh updated KAFKA-1856:
--
Attachment: KAFKA-1856_2015-01-18_21:43:56.patch

 Add PreCommit Patch Testing
 ---

 Key: KAFKA-1856
 URL: https://issues.apache.org/jira/browse/KAFKA-1856
 Project: Kafka
  Issue Type: Task
Reporter: Ashish Kumar Singh
Assignee: Ashish Kumar Singh
 Attachments: KAFKA-1856.patch, KAFKA-1856_2015-01-18_21:43:56.patch


 h1. Kafka PreCommit Patch Testing - *Don't wait for it to break*
 h2. Motivation
 *With great power comes great responsibility* - Uncle Ben. As the Kafka user 
 list grows, a mechanism to ensure the quality of the product is required. 
 Quality becomes hard to measure and maintain in an open source project 
 because of its wide community of contributors. Luckily, Kafka is not the 
 first open source project and can benefit from the learnings of prior 
 projects.
 PreCommit tests are the tests that are run for each patch that gets attached 
 to an open JIRA. Based on the test results, the test execution framework (the 
 test bot) gives the patch a +1 or -1. Having PreCommit tests takes the load 
 off committers, who would otherwise have to look at and test each patch.
 h2. Tests in Kafka
 h3. Unit and Integration Tests
 [Unit and Integration 
 tests|https://cwiki.apache.org/confluence/display/KAFKA/Kafka+0.9+Unit+and+Integration+Tests]
  are cardinal in helping contributors avoid breaking existing functionality 
 while adding new functionality or fixing existing functionality. These tests, 
 at least the ones relevant to the changes, must be run by contributors before 
 attaching a patch to a JIRA.
 h3. System Tests
 [System 
 tests|https://cwiki.apache.org/confluence/display/KAFKA/Kafka+System+Tests] 
 are much wider tests that, unlike unit tests, focus on end-to-end scenarios 
 and not some specific method or class.
 h2. Apache PreCommit tests
 Apache provides a mechanism to automatically build a project and run a series 
 of tests whenever a patch is uploaded to a JIRA. Based on test execution, the 
 test framework will comment with a +1 or -1 on the JIRA.
 You can read more about the framework here:
 http://wiki.apache.org/general/PreCommitBuilds
 h2. Plan
 # Create a test-patch.py script (similar to the one used in Flume, Sqoop and 
 other projects) that will take a jira as a parameter, apply on the 
 appropriate branch, build the project, run tests and report results. This 
 script should be committed into the Kafka code-base. To begin with, this will 
 only run unit tests. We can add code sanity checks, system_tests, etc in the 
 future.
 # Create a jenkins job for running the test (as described in 
 http://wiki.apache.org/general/PreCommitBuilds) and validate that it works 
 manually. This must be done by a committer with Jenkins access.
 # Ask someone with access to https://builds.apache.org/job/PreCommit-Admin/ 
 to add Kafka to the list of projects PreCommit-Admin triggers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 29893: Patch for KAFKA-1856

2015-01-18 Thread Ashish Singh

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/29893/
---

(Updated Jan. 19, 2015, 5:44 a.m.)


Review request for kafka.


Summary (updated)
-

Patch for KAFKA-1856


Bugs: KAFKA-1856
https://issues.apache.org/jira/browse/KAFKA-1856


Repository: kafka


Description
---

KAFKA-1856: Add PreCommit Patch Testing


Diffs (updated)
-

  dev-utils/test-patch.py PRE-CREATION 

Diff: https://reviews.apache.org/r/29893/diff/


Testing
---

Tested on KAFKA-1664, 
https://issues.apache.org/jira/browse/KAFKA-1664?focusedCommentId=14277439page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14277439

How to run:
python dev-utils/test-patch.py --defect KAFKA-1664 --username user_name 
--password password --run-tests --post-results
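(For orientation only: a heavily simplified, hypothetical skeleton of this kind of pre-commit flow. It takes a local patch file instead of a JIRA id and omits the JIRA download/post steps so that it stands alone; the real dev-utils/test-patch.py in the diff above works differently.)

    import argparse
    import subprocess

    def sh(cmd):
        # Run a shell command; True if it exited with status 0.
        return subprocess.call(cmd, shell=True) == 0

    def main():
        parser = argparse.ArgumentParser()
        parser.add_argument("--patch-file", required=True, help="local .patch file to test")
        parser.add_argument("--branch", default="trunk", help="branch to apply the patch on")
        parser.add_argument("--run-tests", action="store_true")
        args = parser.parse_args()

        ok = (sh("git checkout %s" % args.branch)
              and sh("git apply --check %s" % args.patch_file)   # dry run first
              and sh("git apply %s" % args.patch_file)
              and sh("./gradlew build -x test"))
        if ok and args.run_tests:
            ok = sh("./gradlew test")
        print("+1" if ok else "-1")

    if __name__ == "__main__":
        main()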


Thanks,

Ashish Singh



[jira] [Updated] (KAFKA-1876) pom file for scala 2.11 should reference a specific version

2015-01-18 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-1876:
---
Status: Patch Available  (was: Open)

 pom file for scala 2.11 should reference a specific version
 ---

 Key: KAFKA-1876
 URL: https://issues.apache.org/jira/browse/KAFKA-1876
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.2
Reporter: Jun Rao
Assignee: Jun Rao
Priority: Blocker
 Fix For: 0.8.2

 Attachments: kafka-1876.patch


 Currently, the pom file specifies the following scala dependency for 2.11.
 <dependency>
   <groupId>org.scala-lang</groupId>
   <artifactId>scala-library</artifactId>
   <version>2.11</version>
   <scope>compile</scope>
 </dependency>
 However, there is no 2.11 in maven central (there are only 2.11.1, 2.11.2, 
 etc).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [kafka-clients] Re: Heads up: KAFKA-1697 - remove code related to ack>1 on the broker

2015-01-18 Thread Gwen Shapira
Overall, agree on point #1, less sure on point #2.

1. Some protocols never ever add new errors, while others add errors
without bumping versions. HTTP is a good example of the second type.
HTTP-451 was added fairly recently, there are some errors specific to
NGINX, etc. No one cares. I think we should properly document in the
wire-protocol doc that new errors can be added, and I think we should
strongly suggest (and implement ourselves) that unknown error codes
should be shown to users (or at least logged), so they can be googled
and understood through our documentation.
In addition, a hierarchy of error codes, so clients will know whether an
error is retryable just by looking at the code, could be nice. The same
goes for adding an error string to the protocol. These are future
enhancements that should be discussed separately.

2. I think we want to allow admins to upgrade their Kafka brokers
without having to chase down clients in their organization and without
getting blamed if clients break. I think it makes sense to have one
version that will support existing behavior, but log warnings, so
admins will know about misbehaving clients and can track them down
before an upgrade that breaks them (or before the broken config causes
them to lose data!). Hopefully this is indeed a very rare behavior and
we are taking extra precaution for nothing, but I have customers where
one traumatic upgrade means they will never upgrade a Kafka again, so
I'm being conservative.
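(An illustrative sketch of the blanket unknown-error handling and googlable-error ideas discussed in this thread; the codes and classifications shown are only examples and this is not the actual client code.)

    class RetriableError(Exception): pass
    class FatalError(Exception): pass
    class UnknownServerError(Exception): pass

    # A few protocol error codes for flavor; the full table belongs in the protocol docs.
    KNOWN_ERRORS = {
        1: ("OFFSET_OUT_OF_RANGE", FatalError),
        3: ("UNKNOWN_TOPIC_OR_PARTITION", RetriableError),
        6: ("NOT_LEADER_FOR_PARTITION", RetriableError),
    }

    def raise_for_error_code(code):
        if code == 0:
            return  # no error
        if code in KNOWN_ERRORS:
            name, exc = KNOWN_ERRORS[code]
            raise exc(name)
        # Blanket case: surface unknown codes in a searchable form instead of
        # crashing the client or silently swallowing them.
        raise UnknownServerError(
            "Unknown error %d, see the Kafka protocol definition" % code)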

Gwen


On Sun, Jan 18, 2015 at 3:50 PM, Jun Rao j...@confluent.io wrote:
 Overall, I agree with Jay on both points.

 1. I think it's reasonable to add new error codes w/o bumping up the
 protocol version. In most cases, by adding new error codes, we are just
 refining the categorization of those unknown errors. So, a client shouldn't
 behave worse than before as long as unknown errors have been properly
 handled.

 2. I think it's reasonable to just document that 0.8.2 will be the last
 release that will support ack > 1 and remove the support completely in trunk
 w/o bumping up the protocol. This is because (a) we never included ack > 1
 explicitly in the documentation and so the usage should be limited; (b) ack
 > 1 doesn't provide the guarantee that people really want and so it
 shouldn't really be used.

 Thanks,

 Jun


 On Sun, Jan 18, 2015 at 11:03 AM, Jay Kreps jay.kr...@gmail.com wrote:

 Hey guys,

 I really think we are discussing two things here:

 1. How should we generally handle changes to the set of errors? Should
 introducing new errors be considered a protocol change or should we reserve
 the right to introduce new error codes?
 2. Given that this particular change is possibly incompatible, how should we
 handle it?

 I think it would be good for people who are responding here to be specific
 about which they are addressing.

 Here is what I think:

 1. Errors should be extensible within a protocol version.

 We should change the protocol documentation to list the errors that can be
 given back from each api, their meaning, and how to handle them, BUT we
 should explicitly state that the set of errors are open ended. That is we
 should reserve the right to introduce new errors and explicitly state that
 clients need a blanket unknown error handling mechanism. The error can
 link to the protocol definition (something like Unknown error 42, see
 protocol definition at http://link;). We could make this work really well by
 instructing all the clients to report the error in a very googlable way as
 Oracle does with their error format (e.g. ORA-32) so that if you ever get
 the raw error google will take you to the definition.

 I agree that a more rigid definition seems like the right thing, but having
 just implemented two clients and spent a bunch of time on the server side, I
 think it will work out poorly in practice. Here is why:

 I think we will make a lot of mistakes in nailing down the set of error
 codes up front and we will end up going through 3-4 churns of the protocol
 definition just realizing the set of errors that can be thrown. I think this
 churn will actually make life worse for clients that now have to figure out
 7 identical versions of the protocol and will be a mess in terms of testing
 on the server side. I actually know this to be true because while
 implementing the clients I tried to guess the errors that could be thrown,
 then checked my guess by close code inspection. It turned out that I always
 missed things in my belief about errors, but more importantly even after
 close code inspection I found tons of other errors in my stress testing.
 In practice error handling always involves calling out one or two
 meaningful failures that have special recovery and then a blanket case that
 just handles everything else. It's true that some clients may not have done
 this well, but I think it is for the best if they fix that.
 Reserving the right to add errors doesn't mean we will do it without care.
 We will think through each change and 

Re: Review Request 29893: KAFKA-1856: Add PreCommit Patch Testing

2015-01-18 Thread Ashish Singh


 On Jan. 17, 2015, 1:48 a.m., Gwen Shapira wrote:
  dev-utils/test-patch.py, lines 144-145
  https://reviews.apache.org/r/29893/diff/1/?file=821575#file821575line144
 
  Can you validate that the hack is still needed? I'm concerned that we 
  are dragging an old hack around that perhaps was fixed years ago...

Still required. The JIRA APIs do not provide a way to get the list of attachments from a 
JIRA.


 On Jan. 17, 2015, 1:48 a.m., Gwen Shapira wrote:
  dev-utils/test-patch.py, lines 173-179
  https://reviews.apache.org/r/29893/diff/1/?file=821575#file821575line173
 
  Where do these errors end up? will we see them in the JIRA? or in 
  Jenkins?

git_cleanup() is called after patchTesting has been completed, passed or 
failed, before exiting the process. It will only show up in Jenkins as it would 
not be related to the patch.


- Ashish


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/29893/#review68534
---


On Jan. 14, 2015, 7:22 p.m., Ashish Singh wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/29893/
 ---
 
 (Updated Jan. 14, 2015, 7:22 p.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-1856
 https://issues.apache.org/jira/browse/KAFKA-1856
 
 
 Repository: kafka
 
 
 Description
 ---
 
 KAFKA-1856: Add PreCommit Patch Testing
 
 
 Diffs
 -
 
   dev-utils/test-patch.py PRE-CREATION 
 
 Diff: https://reviews.apache.org/r/29893/diff/
 
 
 Testing
 ---
 
 Tested on KAFKA-1664, 
 https://issues.apache.org/jira/browse/KAFKA-1664?focusedCommentId=14277439page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14277439
 
 How to run:
 python dev-utils/test-patch.py --defect KAFKA-1664 --username user_name 
 --password password --run-tests --post-results
 
 
 Thanks,
 
 Ashish Singh
 




[jira] [Commented] (KAFKA-1856) Add PreCommit Patch Testing

2015-01-18 Thread Ashish Kumar Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14282180#comment-14282180
 ] 

Ashish Kumar Singh commented on KAFKA-1856:
---

[~gwenshap] I have tested both cases, one where fix version is an existing 
branch and one where it is not.

 Add PreCommit Patch Testing
 ---

 Key: KAFKA-1856
 URL: https://issues.apache.org/jira/browse/KAFKA-1856
 Project: Kafka
  Issue Type: Task
Reporter: Ashish Kumar Singh
Assignee: Ashish Kumar Singh
 Attachments: KAFKA-1856.patch, KAFKA-1856_2015-01-18_21:43:56.patch


 h1. Kafka PreCommit Patch Testing - *Don't wait for it to break*
 h2. Motivation
 *With great power comes great responsibility* - Uncle Ben. As Kafka user list 
 is growing, mechanism to ensure quality of the product is required. Quality 
 becomes hard to measure and maintain in an open source project, because of a 
 wide community of contributors. Luckily, Kafka is not the first open source 
 project and can benefit from learnings of prior projects.
 PreCommit tests are the tests that are run for each patch that gets attached 
 to an open JIRA. Based on tests results, test execution framework, test bot, 
 +1 or -1 the patch. Having PreCommit tests take the load off committers to 
 look at or test each patch.
 h2. Tests in Kafka
 h3. Unit and Integraiton Tests
 [Unit and Integration 
 tests|https://cwiki.apache.org/confluence/display/KAFKA/Kafka+0.9+Unit+and+Integration+Tests]
  are cardinal to help contributors to avoid breaking existing functionalities 
 while adding new functionalities or fixing older ones. These tests, at least 
 the ones relevant to the changes, must be run by contributors before 
 attaching a patch to a JIRA.
 h3. System Tests
 [System 
 tests|https://cwiki.apache.org/confluence/display/KAFKA/Kafka+System+Tests] 
 are much wider tests that, unlike unit tests, focus on end-to-end scenarios 
 and not some specific method or class.
 h2. Apache PreCommit tests
 Apache provides a mechanism to automatically build a project and run a series 
 of tests whenever a patch is uploaded to a JIRA. Based on test execution, the 
 test framework will comment with a +1 or -1 on the JIRA.
 You can read more about the framework here:
 http://wiki.apache.org/general/PreCommitBuilds
 h2. Plan
 # Create a test-patch.py script (similar to the one used in Flume, Sqoop and 
 other projects) that will take a jira as a parameter, apply on the 
 appropriate branch, build the project, run tests and report results. This 
 script should be committed into the Kafka code-base. To begin with, this will 
 only run unit tests. We can add code sanity checks, system_tests, etc in the 
 future.
 # Create a jenkins job for running the test (as described in 
 http://wiki.apache.org/general/PreCommitBuilds) and validate that it works 
 manually. This must be done by a committer with Jenkins access.
 # Ask someone with access to https://builds.apache.org/job/PreCommit-Admin/ 
 to add Kafka to the list of projects PreCommit-Admin triggers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1869) Opening some random ports while running kafka service

2015-01-18 Thread QianHu (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14282209#comment-14282209
 ] 

QianHu commented on KAFKA-1869:
---

Thank you !!!

 Opening some random ports while running kafka service 
 ---

 Key: KAFKA-1869
 URL: https://issues.apache.org/jira/browse/KAFKA-1869
 Project: Kafka
  Issue Type: Bug
 Environment: kafka_2.9.2-0.8.1.1
Reporter: QianHu
Assignee: Manikumar Reddy
 Fix For: 0.8.2


 while running the kafka service, four random ports have been opened. Of these, 
 and 9092 are set by myself, but 28538 and 16650 are opened randomly. Can you 
 help me understand why these random ports are opened, and how we can give them 
 constant values? Thank you very much.
 [work@02 kafka]$ jps
 8400 Jps
 727 Kafka
 [work@02 kafka]$ netstat -tpln|grep 727
 (Not all processes could be identified, non-owned process info
  will not be shown, you would have to be root to see it all.)
 tcp0  0 0.0.0.0:0.0.0.0:*   
 LISTEN  727/./bin/../jdk1.7 
 tcp0  0 0.0.0.0:28538   0.0.0.0:*   
 LISTEN  727/./bin/../jdk1.7 
 tcp0  0 0.0.0.0:90920.0.0.0:*   
 LISTEN  727/./bin/../jdk1.7 
 tcp0  0 0.0.0.0:16650   0.0.0.0:*   
 LISTEN  727/./bin/../jdk1.7 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1722) static analysis code coverage for pci audit needs

2015-01-18 Thread Ashish Kumar Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14282051#comment-14282051
 ] 

Ashish Kumar Singh commented on KAFKA-1722:
---

Coverity does not support scala at all.
Coverall only has a sbt plugin for scala.

Using Coverall would have been really nice had there been a way to get scala 
coverage. Also, I am not sure if Coverall provides a way to manage a multi-module 
project with projects in different languages. As Kafka uses Gradle as a build 
tool and has most of its code in Scala, I do not think Coverall or Coverity 
will serve the purpose here.

For the scope of this JIRA, I believe having a way to generate coverage 
manually should suffice. Automating it should not be a big deal once we have 
this. Instrumentation and scanning will definitely take extra time, but I do 
not think it is significant. I am not sure how review becomes harder if you get 
additional info on code coverage. If a piece of code is optimized and is tested, 
code coverage can only increase.

 static analysis code coverage for pci audit needs
 -

 Key: KAFKA-1722
 URL: https://issues.apache.org/jira/browse/KAFKA-1722
 Project: Kafka
  Issue Type: Bug
  Components: security
Reporter: Joe Stein
Assignee: Ashish Kumar Singh
 Fix For: 0.9.0


 Code coverage is a measure used to describe the degree to which the source 
 code of a product is tested. A product with high code coverage has been more 
 thoroughly tested and has a lower chance of containing software bugs than a 
 product with low code coverage. Apart from PCI audit needs, increasing user 
 base of Kafka makes it important to increase code coverage of Kafka. 
 Something just can not be improved without being measured.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (KAFKA-1722) static analysis code coverage for pci audit needs

2015-01-18 Thread Ashish Kumar Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14282051#comment-14282051
 ] 

Ashish Kumar Singh edited comment on KAFKA-1722 at 1/19/15 12:56 AM:
-

Coverity does not support scala at all.
Coverall only has a sbt plugin for scala.

Using Coverall would have been really nice had there been a way to get scala 
coverage. Also, I am not sure if Coverall provides a way to manage a multi-module 
project with modules in different languages. As Kafka uses Gradle as a build 
tool and has most of its code in Scala, I do not think Coverall or Coverity 
will serve the purpose here.

For the scope of this JIRA, I believe having a way to generate coverage 
manually should suffice. Automating it should not be a big deal once we have 
this. Instrumentation and scanning will definitely take extra time, but I do 
not think it is significant. I am not sure how review becomes harder if you get 
additional info on code coverage. If a piece of code is optimized and is tested, 
code coverage can only increase.


was (Author: singhashish):
Coverity does not support scala at all.
Coverall only has a sbt plugin for scala.

Using Coverall would have been really nice had their been a way to get scala 
coverage. Also I am not sure if coverall provides a way to manage multi module 
project with projects in different languages. As Kafka uses Gradle as a build 
tool and has most of its code in Scala, I do not think Coverall or Coverity 
will serve the purpose here.

For the scope of this JIRA, I believe having a way to generate coverage 
manually should suffice. Automating it should not be a big deal once we have 
this. Instrumentation and scanning will definitely take extra time, but I do 
not think its significant. I am not sure how review becomes hard if you get 
additional info on code coverage. If a piece of code is optimized and is tested 
code coverage can only increase.

 static analysis code coverage for pci audit needs
 -

 Key: KAFKA-1722
 URL: https://issues.apache.org/jira/browse/KAFKA-1722
 Project: Kafka
  Issue Type: Bug
  Components: security
Reporter: Joe Stein
Assignee: Ashish Kumar Singh
 Fix For: 0.9.0


 Code coverage is a measure used to describe the degree to which the source 
 code of a product is tested. A product with high code coverage has been more 
 thoroughly tested and has a lower chance of containing software bugs than a 
 product with low code coverage. Apart from PCI audit needs, increasing user 
 base of Kafka makes it important to increase code coverage of Kafka. 
 Something just can not be improved without being measured.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1866) LogStartOffset gauge throws exceptions after log.delete()

2015-01-18 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14282056#comment-14282056
 ] 

Jun Rao commented on KAFKA-1866:


I think deleting a topic probably can reproduce the issue too.

 LogStartOffset gauge throws exceptions after log.delete()
 -

 Key: KAFKA-1866
 URL: https://issues.apache.org/jira/browse/KAFKA-1866
 Project: Kafka
  Issue Type: Bug
Reporter: Gian Merlino
Assignee: Sriharsha Chintalapani

 The LogStartOffset gauge does logSegments.head.baseOffset, which throws 
 NoSuchElementException on an empty list, which can occur after a delete() of 
 the log. This makes life harder for custom MetricsReporters, since they have 
 to deal with .value() possibly throwing an exception.
 Locally we're dealing with this by having Log.delete() also call removeMetric 
 on all the gauges. That also has the benefit of not having a bunch of metrics 
 floating around for logs that the broker is not actually handling.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1856) Add PreCommit Patch Testing

2015-01-18 Thread Ashish Kumar Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14282060#comment-14282060
 ] 

Ashish Kumar Singh commented on KAFKA-1856:
---

[~joestein] thanks for the info here!

I was testing out the preCommit testing patch on KAFKA-1694. I applied its latest 
patch on trunk successfully. However, compilation failed. Am I correct in my 
understanding that KAFKA-1694's patch must have been built on trunk?

Also, could you confirm that I can safely assume that all patches must be 
applied to trunk, except that for patches whose {{fixVersion}} is of the form 
A.B.C.D, the patch must be applied on the A.B.C branch?
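
To make that rule concrete, a small sketch of the kind of branch selection being 
proposed (hypothetical helper, not necessarily what test-patch.py actually does):

# Hypothetical illustration of the branch-selection rule discussed above:
# four-part fix versions (A.B.C.D) map to the A.B.C branch, everything else to trunk.
def branch_for_fix_version(fix_version):
    parts = fix_version.split(".") if fix_version else []
    if len(parts) == 4:
        return ".".join(parts[:3])  # e.g. "0.8.2.1" -> "0.8.2"
    return "trunk"

assert branch_for_fix_version("0.8.2.1") == "0.8.2"
assert branch_for_fix_version("0.8.3") == "trunk"
assert branch_for_fix_version(None) == "trunk"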

 Add PreCommit Patch Testing
 ---

 Key: KAFKA-1856
 URL: https://issues.apache.org/jira/browse/KAFKA-1856
 Project: Kafka
  Issue Type: Task
Reporter: Ashish Kumar Singh
Assignee: Ashish Kumar Singh
 Attachments: KAFKA-1856.patch


 h1. Kafka PreCommit Patch Testing - *Don't wait for it to break*
 h2. Motivation
 *With great power comes great responsibility* - Uncle Ben. As Kafka user list 
 is growing, mechanism to ensure quality of the product is required. Quality 
 becomes hard to measure and maintain in an open source project, because of a 
 wide community of contributors. Luckily, Kafka is not the first open source 
 project and can benefit from learnings of prior projects.
 PreCommit tests are the tests that are run for each patch that gets attached 
 to an open JIRA. Based on tests results, test execution framework, test bot, 
 +1 or -1 the patch. Having PreCommit tests take the load off committers to 
 look at or test each patch.
 h2. Tests in Kafka
 h3. Unit and Integration Tests
 [Unit and Integration 
 tests|https://cwiki.apache.org/confluence/display/KAFKA/Kafka+0.9+Unit+and+Integration+Tests]
  are cardinal to help contributors to avoid breaking existing functionalities 
 while adding new functionalities or fixing older ones. These tests, at least 
 the ones relevant to the changes, must be run by contributors before 
 attaching a patch to a JIRA.
 h3. System Tests
 [System 
 tests|https://cwiki.apache.org/confluence/display/KAFKA/Kafka+System+Tests] 
 are much wider tests that, unlike unit tests, focus on end-to-end scenarios 
 and not some specific method or class.
 h2. Apache PreCommit tests
 Apache provides a mechanism to automatically build a project and run a series 
 of tests whenever a patch is uploaded to a JIRA. Based on test execution, the 
 test framework will comment with a +1 or -1 on the JIRA.
 You can read more about the framework here:
 http://wiki.apache.org/general/PreCommitBuilds
 h2. Plan
 # Create a test-patch.py script (similar to the one used in Flume, Sqoop and 
 other projects) that will take a jira as a parameter, apply on the 
 appropriate branch, build the project, run tests and report results. This 
 script should be committed into the Kafka code-base. To begin with, this will 
 only run unit tests. We can add code sanity checks, system_tests, etc in the 
 future.
 # Create a jenkins job for running the test (as described in 
 http://wiki.apache.org/general/PreCommitBuilds) and validate that it works 
 manually. This must be done by a committer with Jenkins access.
 # Ask someone with access to https://builds.apache.org/job/PreCommit-Admin/ 
 to add Kafka to the list of projects PreCommit-Admin triggers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-1875) Add info on PreCommit Patch Testing to wiki

2015-01-18 Thread Ashish Kumar Singh (JIRA)
Ashish Kumar Singh created KAFKA-1875:
-

 Summary: Add info on PreCommit Patch Testing to wiki
 Key: KAFKA-1875
 URL: https://issues.apache.org/jira/browse/KAFKA-1875
 Project: Kafka
  Issue Type: Sub-task
Reporter: Ashish Kumar Singh
Assignee: Ashish Kumar Singh


KAFKA-1856 adds PreCommit testing to Kafka project. A wiki page/ documentation 
is required to outline various steps involved in PreCommit Testing and how it 
works.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1855) Topic unusable after unsuccessful UpdateMetadataRequest

2015-01-18 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14282073#comment-14282073
 ] 

Jun Rao commented on KAFKA-1855:


This could be related to KAFKA-1738. Could you try the latest 0.8.2 branch or 
0.8.2 rc1?

 Topic unusable after unsuccessful UpdateMetadataRequest
 ---

 Key: KAFKA-1855
 URL: https://issues.apache.org/jira/browse/KAFKA-1855
 Project: Kafka
  Issue Type: Bug
  Components: controller
Affects Versions: 0.8.2
Reporter: Henri Pihkala
 Fix For: 0.8.2


 Sometimes, seemingly randomly, topic creation/initialization might fail with 
 the following lines in controller.log. Other logs show no errors. When this 
 happens, the topic is unusable (gives UnknownTopicOrPartition for all 
 requests).
 For me this happens 5-10% of the time. Feels like it's more likely to happen 
 if there is time between topic creations. Observed on 0.8.2-beta, have not 
 tried previous versions.
 [2015-01-09 16:15:27,153] WARN [Controller-0-to-broker-0-send-thread], 
 Controller 0 fails to send a request to broker 
 id:0,host:192.168.10.21,port:9092 (kafka.controller.RequestSendThread)
 java.io.EOFException: Received -1 when reading from channel, socket has 
 likely been closed.
   at kafka.utils.Utils$.read(Utils.scala:381)
   at 
 kafka.network.BoundedByteBufferReceive.readFrom(BoundedByteBufferReceive.scala:54)
   at kafka.network.Receive$class.readCompletely(Transmission.scala:56)
   at 
 kafka.network.BoundedByteBufferReceive.readCompletely(BoundedByteBufferReceive.scala:29)
   at kafka.network.BlockingChannel.receive(BlockingChannel.scala:108)
   at 
 kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:146)
   at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)
 [2015-01-09 16:15:27,156] ERROR [Controller-0-to-broker-0-send-thread], 
 Controller 0 epoch 6 failed to send request 
 Name:UpdateMetadataRequest;Version:0;Controller:0;ControllerEpoch:6;CorrelationId:48;ClientId:id_0-host_192.168.10.21-port_9092;AliveBrokers:id:0,host:192.168.10.21,port:9092;PartitionState:[40963064-cdd2-4cd1-937a-9827d3ab77ad,0]
  - 
 (LeaderAndIsrInfo:(Leader:0,ISR:0,LeaderEpoch:0,ControllerEpoch:6),ReplicationFactor:1),AllReplicas:0)
  to broker id:0,host:192.168.10.21,port:9092. Reconnecting to broker. 
 (kafka.controller.RequestSendThread)
 java.nio.channels.ClosedChannelException
   at kafka.network.BlockingChannel.send(BlockingChannel.scala:97)
   at 
 kafka.controller.RequestSendThread.liftedTree1$1(ControllerChannelManager.scala:132)
   at 
 kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:131)
   at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Review Request 30013: Patch for KAFKA-1867

2015-01-18 Thread Jaikiran Pai

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/30013/
---

Review request for kafka.


Bugs: KAFKA-1867
https://issues.apache.org/jira/browse/KAFKA-1867


Repository: kafka


Description
---

KAFKA-1867 Increase metadata fetch timeout for the producer targeting the 
offsets topic, because of the amount of time it takes to initialize the 
number of partitions of that topic


Diffs
-

  core/src/test/scala/integration/kafka/api/ProducerFailureHandlingTest.scala 
420a1dd30264c72704cc383a4161034c7922177d 

Diff: https://reviews.apache.org/r/30013/diff/


Testing
---


Thanks,

Jaikiran Pai



[jira] [Commented] (KAFKA-1867) liveBroker list not updated on a cluster with no topics

2015-01-18 Thread jaikiran pai (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14281822#comment-14281822
 ] 

jaikiran pai commented on KAFKA-1867:
-

Created reviewboard https://reviews.apache.org/r/30013/diff/
 against branch origin/trunk

 liveBroker list not updated on a cluster with no topics
 ---

 Key: KAFKA-1867
 URL: https://issues.apache.org/jira/browse/KAFKA-1867
 Project: Kafka
  Issue Type: Bug
Reporter: Jun Rao
Assignee: Sriharsha Chintalapani
 Fix For: 0.8.3

 Attachments: KAFKA-1867.patch, KAFKA-1867.patch


 Currently, when there is no topic in a cluster, the controller doesn't send 
 any UpdateMetadataRequest to the broker when it starts up. As a result, the 
 liveBroker list in metadataCache is empty. This means that we will return 
 incorrect broker list in TopicMetadataResponse.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1867) liveBroker list not updated on a cluster with no topics

2015-01-18 Thread jaikiran pai (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jaikiran pai updated KAFKA-1867:

Attachment: KAFKA-1867.patch

 liveBroker list not updated on a cluster with no topics
 ---

 Key: KAFKA-1867
 URL: https://issues.apache.org/jira/browse/KAFKA-1867
 Project: Kafka
  Issue Type: Bug
Reporter: Jun Rao
Assignee: Sriharsha Chintalapani
 Fix For: 0.8.3

 Attachments: KAFKA-1867.patch, KAFKA-1867.patch


 Currently, when there is no topic in a cluster, the controller doesn't send 
 any UpdateMetadataRequest to the broker when it starts up. As a result, the 
 liveBroker list in metadataCache is empty. This means that we will return 
 incorrect broker list in TopicMetadataResponse.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: ProducerFailureHandlingTest.testCannotSendToInternalTopic is failing

2015-01-18 Thread Jaikiran Pai
I could reproduce this consistently when that test *method* is run 
individually. From what I could gather, the __consumer_offsets topic 
(being accessed in that test) had 50 partitions (the default), which took a 
while for each of them to be assigned a leader and do other 
initialization, and that timed out the metadata update wait during 
producer.send. I increased the metadata fetch timeout specifically for 
that producer in that test method and was able to get past this. I've 
sent a patch here: https://reviews.apache.org/r/30013/



-Jaikiran

On Sunday 18 January 2015 12:30 AM, Manikumar Reddy wrote:

  I am consistently getting these errors. Maybe they are transient errors.

On Sun, Jan 18, 2015 at 12:05 AM, Harsha ka...@harsha.io wrote:


I don't see any failures in tests with the latest trunk or 0.8.2. I ran
it few times in a loop.
-Harsha

On Sat, Jan 17, 2015, at 08:38 AM, Manikumar Reddy wrote:

ProducerFailureHandlingTest.testCannotSendToInternalTopic is failing on
both 0.8.2 and trunk.

Error on 0.8.2:
kafka.api.ProducerFailureHandlingTest > testCannotSendToInternalTopic FAILED
    java.util.concurrent.ExecutionException:
    org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 3000 ms.
        at org.apache.kafka.clients.producer.KafkaProducer$FutureFailure.<init>(KafkaProducer.java:437)
        at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:352)
        at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:248)
        at kafka.api.ProducerFailureHandlingTest.testCannotSendToInternalTopic(ProducerFailureHandlingTest.scala:309)
    Caused by:
    org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 3000 ms.

Error on Trunk:
kafka.api.test.ProducerFailureHandlingTest > testCannotSendToInternalTopic FAILED
    java.lang.AssertionError: null
        at org.junit.Assert.fail(Assert.java:69)
        at org.junit.Assert.assertTrue(Assert.java:32)
        at org.junit.Assert.assertTrue(Assert.java:41)
        at kafka.api.test.ProducerFailureHandlingTest.testCannotSendToInternalTopic(ProducerFailureHandlingTest.scala:312)





Re: ProducerFailureHandlingTest.testCannotSendToInternalTopic is failing

2015-01-18 Thread Harsha
Jaikiran,
   I can't reproduce the failure of ProducerFailureHandlingTest.
   I ran the single test. You are probably seeing some errors
   written to the console when you use ./gradlew -i -Dsingle.test .
   These errors are expected in some unit tests, as some of these
   tests exercise failure cases.
If you can reproduce this, or even an intermittent test failure, can you
please open up a new JIRA and attach your patch there.
Your review patch is attached to KAFKA-1867, which is a different issue.
Thanks,
Harsha

On Sun, Jan 18, 2015, at 07:16 AM, Jaikiran Pai wrote:
 I could reproduce this consistently when that test *method* is run 
 individually. From what I could gather, the __consumer_offset topic 
 (being accessed in that test) had 50 partitions (default) which took a 
 while for each of them to be assigned a leader and do other 
 initialization and that timed out the metadata update wait during the 
 producer.send. I increased the metadata fetch timeout specifically for 
 that producer in that test method and was able to get past this. I've 
 sent a patch here https://reviews.apache.org/r/30013/
 
 
 -Jaikiran
 
 On Sunday 18 January 2015 12:30 AM, Manikumar Reddy wrote:
I am consistently getting these errors. May be transient errors.
 
  On Sun, Jan 18, 2015 at 12:05 AM, Harsha ka...@harsha.io wrote:
 
  I don't see any failures in tests with the latest trunk or 0.8.2. I ran
  it few times in a loop.
  -Harsha
 
  On Sat, Jan 17, 2015, at 08:38 AM, Manikumar Reddy wrote:
  ProducerFailureHandlingTest.testCannotSendToInternalTopic is failing on
  both 0.8.2 and trunk.
 
  Error on 0.8.2:
  kafka.api.ProducerFailureHandlingTest  testCannotSendToInternalTopic
  FAILED
   java.util.concurrent.ExecutionException:
  org.apache.kafka.common.errors.TimeoutException: Failed to update
  metadata
  after 3000 ms.
   at
 
  org.apache.kafka.clients.producer.KafkaProducer$FutureFailure.init(KafkaProducer.java:437)
   at
 
  org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:352)
   at
 
  org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:248)
   at
 
  kafka.api.ProducerFailureHandlingTest.testCannotSendToInternalTopic(ProducerFailureHandlingTest.scala:309)
   Caused by:
   org.apache.kafka.common.errors.TimeoutException: Failed to update
  metadata after 3000 ms.
 
 
  Error on Trunk:
  kafka.api.test.ProducerFailureHandlingTest 
  testCannotSendToInternalTopic
  FAILED
   java.lang.AssertionError: null
   at org.junit.Assert.fail(Assert.java:69)
   at org.junit.Assert.assertTrue(Assert.java:32)
   at org.junit.Assert.assertTrue(Assert.java:41)
   at
 
  kafka.api.test.ProducerFailureHandlingTest.testCannotSendToInternalTopic(ProducerFailureHandlingTest.scala:312)
 
 


[jira] [Commented] (KAFKA-1872) Update Developer Setup

2015-01-18 Thread Manikumar Reddy (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14281803#comment-14281803
 ] 

Manikumar Reddy commented on KAFKA-1872:


Thanks for the document.  I was able to create the development setup. 

The KAFKA-1873 and KAFKA-1874 issues are due to a mismatch between the scala 
version configured in the gradle build and the eclipse scala installation. We have 
configured 2.10.4 as the default scala compiler version in the gradle build; please 
check gradle.properties. The eclipse default scala installation is 2.11.

After changing the eclipse scala installation to 2.10.4, I have not faced the 
KAFKA-1873 and KAFKA-1874 errors.

core project -> Properties -> Scala -> Use Project Settings -> set Scala 
Installation to 2.10.4

 Update Developer Setup
 --

 Key: KAFKA-1872
 URL: https://issues.apache.org/jira/browse/KAFKA-1872
 Project: Kafka
  Issue Type: Improvement
  Components: build
Affects Versions: 0.8.2
 Environment: Mac OSX Yosemite
 eclipse Mars M4
 Gradle 2
 Scala 2
 Git
Reporter: Sree Vaddi
Assignee: Sree Vaddi
  Labels: cwiki, development_environment, eclipse, git, gradle, 
 scala, setup
 Fix For: 0.8.2

   Original Estimate: 2h
  Remaining Estimate: 2h

 I setup my developer environment today and came up with an updated document.
 Update the CWiki page at 
 https://cwiki.apache.org/confluence/display/KAFKA/Developer+Setup
 OR create a new page:
 Update the site page at http://kafka.apache.org/code.html
 with the one created in previous step.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [DISCUSS] KIPs

2015-01-18 Thread Jay Kreps
Great! Sounds like everyone is on the same page

   - I created a template page to make things easier. If you do Tools -> Copy
   on this page you can just fill in the italic portions with your details.
   - I retrofitted KIP-1 to match this formatting
   - I added the metadata section people asked for (a link to the
   discussion, the JIRA, and the current status). Let's make sure we remember
   to update the current status as things are figured out.
   - Let's keep the discussion on the mailing list rather than on the wiki
   pages. It makes sense to do one or the other so all the comments are in one
   place, and I think prior experience is that wiki comments are the worse of
   the two.

I think it would be great do KIPs for some of the in-flight items folks
mentioned.

-Jay

On Sat, Jan 17, 2015 at 8:23 AM, Gwen Shapira gshap...@cloudera.com wrote:

 +1

 Will be happy to provide a KIP for the multiple-listeners patch.

 Gwen

 On Sat, Jan 17, 2015 at 8:10 AM, Joe Stein joe.st...@stealth.ly wrote:
  +1 to everything we have been saying and where this (has settled to)/(is
  settling to).
 
  I am sure other folks have some more feedback and think we should try to
  keep this discussion going if need be. I am also a firm believer of form
  following function so kicking the tires some to flesh out the details of
  this and have some organic growth with the process will be healthy and
  beneficial to the community.
 
  For my part, what I will do is open a few KIP based on some of the work I
  have been involved with for 0.8.3. Off the top of my head this would
  include 1) changes to re-assignment of partitions 2) kafka cli 3) global
  configs 4) security white list black list by ip 5) SSL 6) We probably
 will
  have lots of Security related KIPs and should treat them all individually
  when the time is appropriate 7) Kafka on Mesos.
 
  If someone else wants to jump in to start getting some of the security
 KIP
  that we are going to have in 0.8.3 I think that would be great (e.g.
  Multiple Listeners for Kafka Brokers). There are also a few other
 tickets I
  can think of that are important to have in the code in 0.8.3 that should
  have KIP also that I haven't really been involved in. I will take a few
  minutes and go through JIRA (one I can think of like auto assign id that
 is
  already committed I think) and ask for a KIP if appropriate or if I feel
  that I can write it up (both from a time and understanding perspective)
 do
  so.
 
  long story short, I encourage folks to start moving ahead with the KIP
 for
  0.8.3 as how we operate. any objections?
 
  On Fri, Jan 16, 2015 at 2:40 PM, Guozhang Wang wangg...@gmail.com
 wrote:
 
  +1 on the idea, and we could mutually link the KIP wiki page with the
 the
  created JIRA ticket (i.e. include the JIRA number on the page and the
 KIP
  url on the ticket description).
 
  Regarding the KIP process, probably we do not need two phase
 communication
  of a [DISCUSS] followed by [VOTE], as Jay said the voting should be
 clear
  while people discuss about that.
 
  About who should trigger the process, I think the only involved people
  would be 1) when the patch is submitted / or even the ticket is created,
  the assignee could choose to start the KIP process if she thought it is
  necessary; 2) the reviewer of the patch can also suggest starting KIP
  discussions.
 
  On Fri, Jan 16, 2015 at 10:49 AM, Gwen Shapira gshap...@cloudera.com
  wrote:
 
   +1 to Ewen's suggestions: Deprecation, status and version.
  
   Perhaps add the JIRA where the KIP was implemented to the metadata.
   This will help tie things together.
  
   On Fri, Jan 16, 2015 at 9:35 AM, Ewen Cheslack-Postava
   e...@confluent.io wrote:
I think adding a section about deprecation would be helpful. A good
fraction of the time I would expect the goal of a KIP is to fix or
   replace
older functionality that needs continued support for compatibility,
 but
should eventually be phased out. This helps Kafka devs understand
 how
   long
they'll end up supporting multiple versions of features and helps
 users
understand when they're going to have to make updates to their
   applications.
   
Less important but useful -- having a bit of standard metadata like
  PEPs
do. Two I think are important are status (if someone lands on the
 KIP
   page,
can they tell whether this KIP was ever completed?) and/or the
 version
   the
KIP was first released in.
   
   
   
On Fri, Jan 16, 2015 at 9:20 AM, Joel Koshy jjkosh...@gmail.com
  wrote:
   
I'm definitely +1 on the KIP concept. As Joe mentioned, we are
 already
doing this in one form or the other. However, IMO it is fairly ad
 hoc
- i.e., a combination of DISCUSS threads, jira comments, RB and
 code
comments, wikis and html documentation. In the past I have had to
 dig
into a bunch of these to try and figure out why something was
implemented a certain way. I think KIPs can help a lot here 

Re: [kafka-clients] Re: Heads up: KAFKA-1697 - remove code related to ack>1 on the broker

2015-01-18 Thread Jay Kreps
Hey guys,

I really think we are discussing two things here:

   1. How should we generally handle changes to the set of errors? Should
   introducing new errors be considered a protocol change or should we reserve
   the right to introduce new error codes?
   2. Given that this particular change is possibly incompatible, how
   should we handle it?

I think it would be good for people who are responding here to be specific
about which they are addressing.

Here is what I think:

1. Errors should be extensible within a protocol version.

We should change the protocol documentation to list the errors that can be
given back from each api, their meaning, and how to handle them, BUT we
should explicitly state that the set of errors are open ended. That is we
should reserve the right to introduce new errors and explicitly state that
clients need a blanket unknown error handling mechanism. The error can
link to the protocol definition (something like "Unknown error 42, see
protocol definition at http://link"). We could make this work really well
by instructing all the clients to report the error in a very googlable way,
as Oracle does with their error format (e.g. ORA-32), so that if you ever
get the raw error google will take you to the definition.
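
To illustrate the blanket-handling idea in client-neutral terms, here is a minimal 
sketch; the error-code table and message format are illustrative placeholders, not 
any particular client's actual code.

# Hypothetical sketch of the "known errors plus a googlable catch-all" approach
# described above; the error-code table here is illustrative, not exhaustive.
import logging

KNOWN_ERRORS = {
    0: "NONE",
    1: "OFFSET_OUT_OF_RANGE",
    6: "NOT_LEADER_FOR_PARTITION",
}

PROTOCOL_DOC = "https://cwiki.apache.org/confluence/display/KAFKA/A+Guide+To+The+Kafka+Protocol"

def handle_error(code):
    if code == 0:
        return
    name = KNOWN_ERRORS.get(code)
    if name is not None:
        # Errors the client was written against can get specific recovery logic.
        logging.warning("request failed with %s (%d)", name, code)
    else:
        # Anything introduced after this client was written falls through to a
        # single, easily searchable message pointing at the protocol definition.
        logging.error("KAFKA-UNKNOWN-ERROR-%d: see protocol definition at %s", code, PROTOCOL_DOC)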

I agree that a more rigid definition seems like right thing, but having
just implemented two clients and spent a bunch of time on the server side,
I think, it will work out poorly in practice. Here is why:

   - I think we will make a lot of mistakes in nailing down the set of
   error codes up front and we will end up going through 3-4 churns of the
   protocol definition just realizing the set of errors that can be thrown. I
   think this churn will actually make life worse for clients that now have to
   figure out 7 identical versions of the protocol and will be a mess in terms
   of testing on the server side. I actually know this to be true because
   while implementing the clients I tried to guess the errors that could be
   thrown, then checked my guess by close code inspection. It turned out that
   I always missed things in my belief about errors, but more importantly even
   after close code inspection I found tons of other errors in my stress
   testing.
   - In practice error handling always involves calling out one or two
   meaningful failures that have special recovery and then a blanket case that
   just handles everything else. It's true that some clients may not have done
   this well, but I think it is for the best if they fix that.
   - Reserving the right to add errors doesn't mean we will do it without
   care. We will think through each change and decide whether giving a little
   more precision in the error is worth the overhead and churn of a protocol
   version bump.

2. In this case in particular we should not introduce a new protocol version

In this particular case we are saying that acks > 1 doesn't make sense and
we want to give an error to people specifying this so that they change
their configuration. This is a configuration that few people use and we
want to just make it an error. The bad behavior will just be that the error
will not be as good as it could be. I think that is a better tradeoff than
introducing a separate protocol version (this may be true of the java
clients too).

We will have lots of cases like this in the future and we aren't going to
want to churn the protocol for each of them. For example we previously had
to get more precise about which characters were legal and which illegal in
topic names.

-Jay

On Fri, Jan 16, 2015 at 11:55 AM, Gwen Shapira gshap...@cloudera.com
wrote:

 I updated the KIP: Using acks > 1 in version 0 will log a WARN message
 in the broker about client using deprecated behavior (suggested by Joe
 in the JIRA, and I think it makes sense).

 Gwen

 On Fri, Jan 16, 2015 at 10:40 AM, Gwen Shapira gshap...@cloudera.com
 wrote:
  How about we continue the discussion on this thread, so we won't lose
  the context of this discussion, and put it up for VOTE when this has
  been finalized?
 
  On Fri, Jan 16, 2015 at 10:22 AM, Neha Narkhede n...@confluent.io
 wrote:
  Gwen,
 
  KIP write-up looks good. According to the rest of the KIP process
 proposal,
  would you like to start a DISCUSS/VOTE thread for it?
 
  Thanks,
  Neha
 
  On Fri, Jan 16, 2015 at 9:37 AM, Ewen Cheslack-Postava 
 e...@confluent.io
  wrote:
 
  Gwen -- KIP write up looks good. Deprecation schedule probably needs
 to be
  more specific, but I think that discussion probably needs to happen
 after a
  solution is agreed upon.
 
  Jay -- I think "older clients will get a bad error message instead of a
  good one" isn't what would be happening with this change. Previously they
  wouldn't have received an error and they would have been able to produce
  messages. After the change they'll just receive this new error message,
  which their clients can't possibly handle gracefully since it didn't exist
  when the client was written. 

Re: Review Request 24214: Patch for KAFKA-1374

2015-01-18 Thread Eric Olander

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/24214/#review68569
---



core/src/test/scala/unit/kafka/log/LogCleanerIntegrationTest.scala
https://reviews.apache.org/r/24214/#comment112888

Could be simplified to just:
for (codec <- CompressionType.values) yield Array(codec.name)


- Eric Olander


On Jan. 17, 2015, 6:53 p.m., Manikumar Reddy O wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/24214/
 ---
 
 (Updated Jan. 17, 2015, 6:53 p.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-1374
 https://issues.apache.org/jira/browse/KAFKA-1374
 
 
 Repository: kafka
 
 
 Description
 ---
 
 Updating the rebased code
 
 
 Diffs
 -
 
   core/src/main/scala/kafka/log/LogCleaner.scala 
 f8e7cd5fabce78c248a9027c4bb374a792508675 
   core/src/main/scala/kafka/tools/TestLogCleaning.scala 
 af496f7c547a5ac7a4096a6af325dad0d8feec6f 
   core/src/test/scala/unit/kafka/log/LogCleanerIntegrationTest.scala 
 07acd460b1259e0a3f4069b8b8dcd8123ef5810e 
 
 Diff: https://reviews.apache.org/r/24214/diff/
 
 
 Testing
 ---
 
 /*TestLogCleaning stress test output for compressed messages*/
 
 Producing 100000 messages...
 Logging produce requests to 
 /tmp/kafka-log-cleaner-produced-6014466306002699464.txt
 Sleeping for 120 seconds...
 Consuming messages...
 Logging consumed messages to 
 /tmp/kafka-log-cleaner-consumed-177538909590644701.txt
 100000 rows of data produced, 13165 rows of data consumed (86.8% reduction).
 De-duplicating and validating output files...
 Validated 9005 values, 0 mismatches.
 
 Producing 1000000 messages...
 Logging produce requests to 
 /tmp/kafka-log-cleaner-produced-3298578695475992991.txt
 Sleeping for 120 seconds...
 Consuming messages...
 Logging consumed messages to 
 /tmp/kafka-log-cleaner-consumed-7192293977610206930.txt
 1000000 rows of data produced, 119926 rows of data consumed (88.0% reduction).
 De-duplicating and validating output files...
 Validated 89947 values, 0 mismatches.
 
 Producing 10000000 messages...
 Logging produce requests to 
 /tmp/kafka-log-cleaner-produced-3336255463347572934.txt
 Sleeping for 120 seconds...
 Consuming messages...
 Logging consumed messages to 
 /tmp/kafka-log-cleaner-consumed-9149188270705707725.txt
 10000000 rows of data produced, 1645281 rows of data consumed (83.5% reduction).
 De-duplicating and validating output files...
 Validated 899853 values, 0 mismatches.
 
 
 /*TestLogCleaning stress test output for non-compressed messages*/
 
 Producing 100000 messages...
 Logging produce requests to 
 /tmp/kafka-log-cleaner-produced-5174543709786189363.txt
 Sleeping for 120 seconds...
 Consuming messages...
 Logging consumed messages to 
 /tmp/kafka-log-cleaner-consumed-514345501144701.txt
 100000 rows of data produced, 22775 rows of data consumed (77.2% reduction).
 De-duplicating and validating output files...
 Validated 17874 values, 0 mismatches.
 
 Producing 1000000 messages...
 Logging produce requests to 
 /tmp/kafka-log-cleaner-produced-7814446915546169271.txt
 Sleeping for 120 seconds...
 Consuming messages...
 Logging consumed messages to 
 /tmp/kafka-log-cleaner-consumed-5172557663160447626.txt
 1000000 rows of data produced, 129230 rows of data consumed (87.1% reduction).
 De-duplicating and validating output files...
 Validated 89947 values, 0 mismatches.
 
 Producing 10000000 messages...
 Logging produce requests to 
 /tmp/kafka-log-cleaner-produced-6092986571905399164.txt
 Sleeping for 120 seconds...
 Consuming messages...
 Logging consumed messages to 
 /tmp/kafka-log-cleaner-consumed-63626021421841220.txt
 10000000 rows of data produced, 1136608 rows of data consumed (88.6% reduction).
 De-duplicating and validating output files...
 Validated 899853 values, 0 mismatches.
 
 
 Thanks,
 
 Manikumar Reddy O
 




[jira] [Commented] (KAFKA-1724) Errors after reboot in single node setup

2015-01-18 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14281884#comment-14281884
 ] 

Sriharsha Chintalapani commented on KAFKA-1724:
---

[~junrao] Can you please take a look at my reply to the review. Thanks.

 Errors after reboot in single node setup
 

 Key: KAFKA-1724
 URL: https://issues.apache.org/jira/browse/KAFKA-1724
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.2
Reporter: Ciprian Hacman
Assignee: Sriharsha Chintalapani
  Labels: newbie
 Fix For: 0.8.2

 Attachments: KAFKA-1724.patch


 In a single node setup, after reboot, Kafka logs show the following:
 {code}
 [2014-10-22 16:37:22,206] INFO [Controller 0]: Controller starting up 
 (kafka.controller.KafkaController)
 [2014-10-22 16:37:22,419] INFO [Controller 0]: Controller startup complete 
 (kafka.controller.KafkaController)
 [2014-10-22 16:37:22,554] INFO conflict in /brokers/ids/0 data: 
 {jmx_port:-1,timestamp:1413995842465,host:ip-10-91-142-54.eu-west-1.compute.internal,version:1,port:9092}
  stored data: 
 {jmx_port:-1,timestamp:1413994171579,host:ip-10-91-142-54.eu-west-1.compute.internal,version:1,port:9092}
  (kafka.utils.ZkUtils$)
 [2014-10-22 16:37:22,736] INFO I wrote this conflicted ephemeral node 
 [{jmx_port:-1,timestamp:1413995842465,host:ip-10-91-142-54.eu-west-1.compute.internal,version:1,port:9092}]
  at /brokers/ids/0 a while back in a different session, hence I will backoff 
 for this node to be deleted by Zookeeper and retry (kafka.utils.ZkUtils$)
 [2014-10-22 16:37:25,010] ERROR Error handling event ZkEvent[Data of 
 /controller changed sent to 
 kafka.server.ZookeeperLeaderElector$LeaderChangeListener@a6af882] 
 (org.I0Itec.zkclient.ZkEventThread)
 java.lang.IllegalStateException: Kafka scheduler has not been started
 at kafka.utils.KafkaScheduler.ensureStarted(KafkaScheduler.scala:114)
 at kafka.utils.KafkaScheduler.shutdown(KafkaScheduler.scala:86)
 at 
 kafka.controller.KafkaController.onControllerResignation(KafkaController.scala:350)
 at 
 kafka.controller.KafkaController$$anonfun$2.apply$mcV$sp(KafkaController.scala:162)
 at 
 kafka.server.ZookeeperLeaderElector$LeaderChangeListener$$anonfun$handleDataDeleted$1.apply$mcZ$sp(ZookeeperLeaderElector.scala:138)
 at 
 kafka.server.ZookeeperLeaderElector$LeaderChangeListener$$anonfun$handleDataDeleted$1.apply(ZookeeperLeaderElector.scala:134)
 at 
 kafka.server.ZookeeperLeaderElector$LeaderChangeListener$$anonfun$handleDataDeleted$1.apply(ZookeeperLeaderElector.scala:134)
 at kafka.utils.Utils$.inLock(Utils.scala:535)
 at 
 kafka.server.ZookeeperLeaderElector$LeaderChangeListener.handleDataDeleted(ZookeeperLeaderElector.scala:134)
 at org.I0Itec.zkclient.ZkClient$6.run(ZkClient.java:549)
 at org.I0Itec.zkclient.ZkEventThread.run(ZkEventThread.java:71)
 [2014-10-22 16:37:28,757] INFO Registered broker 0 at path /brokers/ids/0 
 with address ip-10-91-142-54.eu-west-1.compute.internal:9092. 
 (kafka.utils.ZkUtils$)
 [2014-10-22 16:37:28,849] INFO [Kafka Server 0], started 
 (kafka.server.KafkaServer)
 [2014-10-22 16:38:56,718] INFO Closing socket connection to /127.0.0.1. 
 (kafka.network.Processor)
 [2014-10-22 16:38:56,850] INFO Closing socket connection to /127.0.0.1. 
 (kafka.network.Processor)
 [2014-10-22 16:38:56,985] INFO Closing socket connection to /127.0.0.1. 
 (kafka.network.Processor)
 {code}
 The last log line repeats forever and is correlated with errors on the app 
 side.
 Restarting Kafka fixes the errors.
 Steps to reproduce (with help from the mailing list):
 # start zookeeper
 # start kafka-broker
 # create topic or start a producer writing to a topic
 # stop zookeeper
 # stop kafka-broker( kafka broker shutdown goes into  WARN Session
 0x14938d9dc010001 for server null, unexpected error, closing socket 
 connection and attempting reconnect (org.apache.zookeeper.ClientCnxn) 
 java.net.ConnectException: Connection refused)
 # kill -9 kafka-broker
 # restart zookeeper and than kafka-broker leads into the the error above



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1722) static analysis code coverage for pci audit needs

2015-01-18 Thread Ashish Kumar Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14281995#comment-14281995
 ] 

Ashish Kumar Singh commented on KAFKA-1722:
---

[~bosco] Coverall supports scala, but it will also have the limitations I 
mentioned above. For automated coverage reports, I was planning to make this 
part of the preCommit patch testing. For each patch, the contributor can then see 
whether the patch decreases or increases code coverage. If the patch decreases 
code coverage by more than a threshold value, the preCommit patch testing bot 
will give it a -1.
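
A rough sketch of how such a gate might plug into the precommit bot; the threshold, 
numbers, and function name below are hypothetical placeholders, not part of the 
actual setup.

# Hypothetical sketch of the coverage gate described above; the threshold and
# report values are placeholders, not the real precommit configuration.
COVERAGE_DROP_THRESHOLD = 1.0  # percentage points a patch is allowed to lose

def coverage_vote(coverage_before, coverage_after, threshold=COVERAGE_DROP_THRESHOLD):
    """Return '+1' unless the patch drops coverage by more than the threshold."""
    if coverage_before - coverage_after > threshold:
        return "-1 (coverage dropped from %.1f%% to %.1f%%)" % (coverage_before, coverage_after)
    return "+1 (coverage went from %.1f%% to %.1f%%)" % (coverage_before, coverage_after)

print(coverage_vote(72.4, 70.1))  # -1 with the default threshold
print(coverage_vote(72.4, 72.9))  # +1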

 static analysis code coverage for pci audit needs
 -

 Key: KAFKA-1722
 URL: https://issues.apache.org/jira/browse/KAFKA-1722
 Project: Kafka
  Issue Type: Bug
  Components: security
Reporter: Joe Stein
Assignee: Ashish Kumar Singh
 Fix For: 0.9.0


 Code coverage is a measure used to describe the degree to which the source 
 code of a product is tested. A product with high code coverage has been more 
 thoroughly tested and has a lower chance of containing software bugs than a 
 product with low code coverage. Apart from PCI audit needs, increasing user 
 base of Kafka makes it important to increase code coverage of Kafka. 
 Something just can not be improved without being measured.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1722) static analysis code coverage for pci audit needs

2015-01-18 Thread Don Bosco Durai (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14282002#comment-14282002
 ] 

Don Bosco Durai commented on KAFKA-1722:


There are a few things to note here:
- Instrumentation and scanning take a significant amount of time (at least in java)
- There is an upfront cost to review and write rules to eliminate false positives
- There is a routine cost to eliminate false positives

If we can set up this process, then it will be ideal and beneficial. It 
would be good to have a build option to optionally run the scanning before 
committing the code. 

Also, by increase/decrease in code coverage, do you mean the number of lines or 
the number of issues? The number of lines can decrease if a piece of code is optimized.


 static analysis code coverage for pci audit needs
 -

 Key: KAFKA-1722
 URL: https://issues.apache.org/jira/browse/KAFKA-1722
 Project: Kafka
  Issue Type: Bug
  Components: security
Reporter: Joe Stein
Assignee: Ashish Kumar Singh
 Fix For: 0.9.0


 Code coverage is a measure used to describe the degree to which the source 
 code of a product is tested. A product with high code coverage has been more 
 thoroughly tested and has a lower chance of containing software bugs than a 
 product with low code coverage. Apart from PCI audit needs, increasing user 
 base of Kafka makes it important to increase code coverage of Kafka. 
 Something just can not be improved without being measured.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [kafka-clients] Re: Heads up: KAFKA-1697 - remove code related to ack>1 on the broker

2015-01-18 Thread Jun Rao
Overall, I agree with Jay on both points.

1. I think it's reasonable to add new error codes w/o bumping up the
protocol version. In most cases, by adding new error codes, we are just
refining the categorization of those unknown errors. So, a client shouldn't
behave worse than before as long as unknown errors have been properly
handled.

2. I think it's reasonable to just document that 0.8.2 will be the last
release that will support ack > 1 and remove the support completely in
trunk w/o bumping up the protocol. This is because (a) we never included
ack > 1 explicitly in the documentation and so the usage should be limited;
(b) ack > 1 doesn't provide the guarantee that people really want and so it
shouldn't really be used.

Thanks,

Jun


On Sun, Jan 18, 2015 at 11:03 AM, Jay Kreps jay.kr...@gmail.com wrote:

 Hey guys,

 I really think we are discussing two things here:

1. How should we generally handle changes to the set of errors? Should
introducing new errors be considered a protocol change or should we reserve
the right to introduce new error codes?
2. Given that this particular change is possibly incompatible, how
should we handle it?

 I think it would be good for people who are responding here to be specific
 about which they are addressing.

 Here is what I think:

 1. Errors should be extensible within a protocol version.

 We should change the protocol documentation to list the errors that can be
 given back from each api, their meaning, and how to handle them, BUT we
 should explicitly state that the set of errors are open ended. That is we
 should reserve the right to introduce new errors and explicitly state that
 clients need a blanket unknown error handling mechanism. The error can
 link to the protocol definition (something like "Unknown error 42, see
 protocol definition at http://link"). We could make this work really well
 by instructing all the clients to report the error in a very googlable way
 as Oracle does with their error format (e.g. ORA-32) so that if you ever
 get the raw error google will take you to the definition.

 I agree that a more rigid definition seems like right thing, but having
 just implemented two clients and spent a bunch of time on the server side,
 I think, it will work out poorly in practice. Here is why:

- I think we will make a lot of mistakes in nailing down the set of
error codes up front and we will end up going through 3-4 churns of the
protocol definition just realizing the set of errors that can be thrown. I
think this churn will actually make life worse for clients that now have to
figure out 7 identical versions of the protocol and will be a mess in terms
of testing on the server side. I actually know this to be true because
while implementing the clients I tried to guess the errors that could be
thrown, then checked my guess by close code inspection. It turned out that
I always missed things in my belief about errors, but more importantly even
after close code inspection I found tons of other errors in my stress
testing.
- In practice error handling always involves calling out one or two
meaningful failures that have special recovery and then a blanket case that
just handles everything else. It's true that some clients may not have done
this well, but I think it is for the best if they fix that.
- Reserving the right to add errors doesn't mean we will do it without
care. We will think through each change and decide whether giving a little
more precision in the error is worth the overhead and churn of a protocol
version bump.

 2. In this case in particular we should not introduce a new protocol
 version

 In this particular case we are saying that acks > 1 doesn't make sense and
 we want to give an error to people specifying this so that they change
 their configuration. This is a configuration that few people use and we
 want to just make it an error. The bad behavior will just be that the error
 will not be as good as it could be. I think that is a better tradeoff than
 introducing a separate protocol version (this may be true of the java
 clients too).

 We will have lots of cases like this in the future and we aren't going to
 want to churn the protocol for each of them. For example we previously had
 to get more precise about which characters were legal and which illegal in
 topic names.

 -Jay

 On Fri, Jan 16, 2015 at 11:55 AM, Gwen Shapira gshap...@cloudera.com
 wrote:

 I updated the KIP: Using acks > 1 in version 0 will log a WARN message
 in the broker about client using deprecated behavior (suggested by Joe
 in the JIRA, and I think it makes sense).

 Gwen

 On Fri, Jan 16, 2015 at 10:40 AM, Gwen Shapira gshap...@cloudera.com
 wrote:
  How about we continue the discussion on this thread, so we won't lose
  the context of this discussion, and put it up for VOTE when this has
  been finalized?
 
  On Fri, Jan 16, 2015 at 10:22 AM, Neha Narkhede 

[jira] [Commented] (KAFKA-1722) static analysis code coverage for pci audit needs

2015-01-18 Thread Ashish Kumar Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14281977#comment-14281977
 ] 

Ashish Kumar Singh commented on KAFKA-1722:
---

After elaborate trials of various available tools and compatibility plugins, 
below is a brief summary.

*We need to measure code coverage of following modules*
# Core (in Scala, with a little Java code)
# Clients (in Java)
Other modules do not have tests.

*Lang specific coverage tools*
# Java, [JaCoCo|http://www.eclemma.org/jacoco/] appears to be a decent tool, 
which provides line and branch coverage.
# Scala, [Scoverage|http://scoverage.org/] provides line and branch coverage.

*Coverage summary*
[SonarQube|http://www.sonarqube.org/] is a widely used tool that provides the 
capability to merge coverage reports from various modules and present an 
overall report. Sonar uses plugins to parse and understand the coverage report of 
an underlying sub-module of a project. A project can have sub-modules with 
different coverage tools, i.e., in different languages. We need following 
plugins for Kafka.
# Sonar-Jacoco (v2.1)
# Sonar-scoverage-plugin

*Issues*
# Sonar-scoverage-plugin depends on 
[scalac-scoverage-plugin|https://github.com/scoverage/scalac-scoverage-plugin]. 
scalac-scoverage-plugin can be used in a gradle project using 
[gradle-scoverage|https://github.com/scoverage/gradle-scoverage]. 
gradle-scoverage, as of now, only publishes html and cobertura reports. However, 
the sonar-scoverage-plugin needs a scoverage report to be able to parse it.
In short, sonar can not report coverage for the scala project as of now. A full 
coverage report does get generated for the scala project, but it would not show up 
in the overall report. I have discussed this with the collaborators of 
gradle-scoverage and they are working on it.
# Scala 2.10 is not supported by scalac-scoverage-plugin, [detailed 
discussion|https://github.com/scoverage/scalac-scoverage-plugin/blob/master/2.10.md].

*OK, so where do we stand*
We can generate coverage reports, with line and branch coverage included, for 
the core and clients sub-modules.
We can generate a sonar summary report for the project, but that will only 
include coverage of the clients sub-module.
The coverage report (a web report) for the core module will have to be browsed separately.
As soon as gradle-scoverage starts publishing the scoverage report, we can see 
core's coverage as well in the sonar summary report.

If this sounds ok then I can provide a patch.

 static analysis code coverage for pci audit needs
 -

 Key: KAFKA-1722
 URL: https://issues.apache.org/jira/browse/KAFKA-1722
 Project: Kafka
  Issue Type: Bug
  Components: security
Reporter: Joe Stein
Assignee: Ashish Kumar Singh
 Fix For: 0.9.0


 Code coverage is a measure used to describe the degree to which the source 
 code of a product is tested. A product with high code coverage has been more 
 thoroughly tested and has a lower chance of containing software bugs than a 
 product with low code coverage. Apart from PCI audit needs, increasing user 
 base of Kafka makes it important to increase code coverage of Kafka. 
 Something just can not be improved without being measured.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1722) static analysis code coverage for pci audit needs

2015-01-18 Thread Don Bosco Durai (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14281981#comment-14281981
 ] 

Don Bosco Durai commented on KAFKA-1722:


Ashish, Coverity is another option. They are free for open source projects. I 
have been scanning for most of the Hadoop projects.

There is already a project created for Kafka 
(https://scan.coverity.com/projects/1340). I am not sure who is the owner, but 
if you want I can investigate that path. 

I had checked with Coverity before and they don't support Scala yet. So it will 
be only for the java components.


 static analysis code coverage for pci audit needs
 -

 Key: KAFKA-1722
 URL: https://issues.apache.org/jira/browse/KAFKA-1722
 Project: Kafka
  Issue Type: Bug
  Components: security
Reporter: Joe Stein
Assignee: Ashish Kumar Singh
 Fix For: 0.9.0


 Code coverage is a measure used to describe the degree to which the source 
 code of a product is tested. A product with high code coverage has been more 
 thoroughly tested and has a lower chance of containing software bugs than a 
 product with low code coverage. Apart from PCI audit needs, increasing user 
 base of Kafka makes it important to increase code coverage of Kafka. 
 Something just can not be improved without being measured.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (KAFKA-1722) static analysis code coverage for pci audit needs

2015-01-18 Thread Ashish Kumar Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14281977#comment-14281977
 ] 

Ashish Kumar Singh edited comment on KAFKA-1722 at 1/18/15 10:18 PM:
-

After extensive trials of the available tools and compatibility plugins, here 
is a brief summary.

*We need to measure code coverage of the following modules*
# Core (in Scala, with a little Java code)
# Clients (in Java)
Other modules do not have tests.

*Language-specific coverage tools*
# Java: [JaCoCo|http://www.eclemma.org/jacoco/] appears to be a decent tool and 
provides line and branch coverage.
# Scala: [Scoverage|http://scoverage.org/] provides line and branch coverage.
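To make this concrete, below is a rough sketch of how the two tools could be 
wired into the gradle build for the clients and core sub-modules. Treat it as 
an illustration only: the plugin versions, configuration names and report paths 
are my assumptions and have not been tested against trunk.

{code}
// Sketch only: plugin versions, configuration names and report paths are
// assumptions, not a tested change against Kafka trunk.
buildscript {
    repositories { mavenCentral() }
    dependencies {
        // gradle-scoverage wraps scalac-scoverage-plugin for gradle builds
        classpath 'org.scoverage:gradle-scoverage:0.5.0'   // version is a guess
    }
}

project(':clients') {
    // clients is pure Java, so the stock gradle jacoco plugin is enough
    apply plugin: 'jacoco'
    jacocoTestReport {
        reports {
            xml.enabled true    // xml is what sonar/coveralls style tools consume
            html.enabled true   // html for browsing locally
        }
    }
}

project(':core') {
    // core is mostly Scala, so instrument it with gradle-scoverage
    apply plugin: 'scoverage'
    dependencies {
        // scala compiler plugin that does the instrumentation
        // (artifact name and version are guesses; 2.11 because 2.10 is not supported)
        scoverage 'org.scoverage:scalac-scoverage-plugin_2.11:1.0.4'
    }
}
{code}

With something along these lines in place, running the JaCoCo report task on 
clients and the scoverage report task on core (exact task names per the 
respective plugin docs) produces the two per-module reports described below.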

*Coverage summary*
[SonarQube|http://www.sonarqube.org/] is a widely used tool that can merge 
coverage reports from various modules and present an overall report. Sonar uses 
plugins to parse and understand the coverage report of each sub-module of a 
project, so a project can have sub-modules that use different coverage tools, 
e.g., because they are written in different languages. We need the following 
plugins for Kafka.
# Sonar-Jacoco (v2.1)
# Sonar-scoverage-plugin
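For reference, the sonar side of the wiring would look roughly like the sketch 
below. The sonar-runner gradle plugin usage and the report-path property names 
are assumptions based on the plugins' documentation, not verified settings.

{code}
// Sketch only: property names and report paths are assumptions.
apply plugin: 'sonar-runner'

sonarRunner {
    sonarProperties {
        property 'sonar.projectName', 'Apache Kafka'
        // clients: point the jacoco analyser at the exec file produced by the tests
        property 'sonar.jacoco.reportPath', 'clients/build/jacoco/test.exec'
        // core: needs a scoverage-format report, which gradle-scoverage does not
        // publish yet (see the issues below)
        property 'sonar.scoverage.reportPath', 'core/build/reports/scoverage/scoverage.xml'
    }
}
{code}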

*Issues*
# Sonar-scoverage-plugin depends on 
[scalac-scoverage-plugin|https://github.com/scoverage/scalac-scoverage-plugin]. 
scalac-scoverage-plugin can be used in a gradle project via 
[gradle-scoverage|https://github.com/scoverage/gradle-scoverage]. However, 
gradle-scoverage, as of now, only publishes HTML and Cobertura reports, while 
the Sonar plugin needs a report in scoverage format to be able to parse it.
In short, Sonar cannot report coverage for the Scala code as of now. A full 
coverage report does get generated for the Scala module, but it will not show 
up in the overall report. I have discussed this with the collaborators of 
gradle-scoverage and they are working on it.
# Scala 2.10 is not supported by scalac-scoverage-plugin, [detailed 
discussion|https://github.com/scoverage/scalac-scoverage-plugin/blob/master/2.10.md].

*OK, so where do we stand*
We can generate coverage reports, with line and branch coverage included, for 
the core and clients sub-modules.
We can generate a sonar summary report for the project, but it will only 
include coverage for the clients sub-module.
The coverage report (web report) for the core module will have to be browsed 
separately.
As soon as gradle-scoverage starts publishing the scoverage report ([tracked 
here|https://github.com/scoverage/scalac-scoverage-plugin/issues/81]), core's 
coverage will show up in the sonar summary report as well.

If this sounds OK, I can provide a patch.



[jira] [Commented] (KAFKA-1722) static analysis code coverage for pci audit needs

2015-01-18 Thread Ashish Kumar Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14281984#comment-14281984
 ] 

Ashish Kumar Singh commented on KAFKA-1722:
---

If we are open to using something like that, then I guess 
[Coveralls|https://coveralls.io/] is a better option.

 static analysis code coverage for pci audit needs
 -

 Key: KAFKA-1722
 URL: https://issues.apache.org/jira/browse/KAFKA-1722
 Project: Kafka
  Issue Type: Bug
  Components: security
Reporter: Joe Stein
Assignee: Ashish Kumar Singh
 Fix For: 0.9.0


 Code coverage is a measure used to describe the degree to which the source 
 code of a product is tested. A product with high code coverage has been more 
 thoroughly tested and has a lower chance of containing software bugs than a 
 product with low code coverage. Apart from PCI audit needs, increasing user 
 base of Kafka makes it important to increase code coverage of Kafka. 
 Something just can not be improved without being measured.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (KAFKA-1722) static analysis code coverage for pci audit needs

2015-01-18 Thread Ashish Kumar Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14281984#comment-14281984
 ] 

Ashish Kumar Singh edited comment on KAFKA-1722 at 1/18/15 10:25 PM:
-

If we are open to using something like that, then I guess 
[Coveralls|https://coveralls.io/] is a better option. But again, the concern is 
getting Scala coverage. I think the best option we have as of now is to go 
ahead with what I suggested before. I am sure gradle-scoverage will soon start 
publishing the scoverage report.
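
For what it is worth, hooking Coveralls in on the gradle side would be a small 
change. The sketch below uses the community coveralls gradle plugin; the plugin 
coordinates, version and report path are my assumptions, and until the Scala 
side is sorted out it would only pick up the JaCoCo report from clients.

{code}
// Sketch only: plugin coordinates, version and report path are assumptions.
// Coveralls consumes the JaCoCo xml report, so this covers clients but not core.
buildscript {
    repositories { mavenCentral() }
    dependencies {
        classpath 'org.kt3k.gradle.plugin:coveralls-gradle-plugin:2.0.1'
    }
}

apply plugin: 'com.github.kt3k.coveralls'

coveralls {
    // point the plugin at the jacoco xml report generated for clients
    jacocoReportPath = 'clients/build/reports/jacoco/test/jacocoTestReport.xml'
}
{code}

The plugin's upload task would then run from the CI job, with the repo token 
supplied through the environment.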



 static analysis code coverage for pci audit needs
 -

 Key: KAFKA-1722
 URL: https://issues.apache.org/jira/browse/KAFKA-1722
 Project: Kafka
  Issue Type: Bug
  Components: security
Reporter: Joe Stein
Assignee: Ashish Kumar Singh
 Fix For: 0.9.0


 Code coverage is a measure used to describe the degree to which the source 
 code of a product is tested. A product with high code coverage has been more 
 thoroughly tested and has a lower chance of containing software bugs than a 
 product with low code coverage. Apart from PCI audit needs, increasing user 
 base of Kafka makes it important to increase code coverage of Kafka. 
 Something just can not be improved without being measured.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1722) static analysis code coverage for pci audit needs

2015-01-18 Thread Don Bosco Durai (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14281992#comment-14281992
 ] 

Don Bosco Durai commented on KAFKA-1722:


Coveralls also seems to be good. It says on its website that it supports 
Scala, though I am not sure to what level.

Have you thought about automating the build and submission? Coverity can be 
integrated with Travis CI, so it is easy to schedule the build and have the 
results shared with everyone.


 static analysis code coverage for pci audit needs
 -

 Key: KAFKA-1722
 URL: https://issues.apache.org/jira/browse/KAFKA-1722
 Project: Kafka
  Issue Type: Bug
  Components: security
Reporter: Joe Stein
Assignee: Ashish Kumar Singh
 Fix For: 0.9.0


 Code coverage is a measure used to describe the degree to which the source 
 code of a product is tested. A product with high code coverage has been more 
 thoroughly tested and has a lower chance of containing software bugs than a 
 product with low code coverage. Apart from PCI audit needs, increasing user 
 base of Kafka makes it important to increase code coverage of Kafka. 
 Something just can not be improved without being measured.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1722) static analysis code coverage for pci audit needs

2015-01-18 Thread Ashish Kumar Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish Kumar Singh updated KAFKA-1722:
--
Description: Code coverage is a measure used to describe the degree to 
which the source code of a product is tested. A product with high code coverage 
has been more thoroughly tested and has a lower chance of containing software 
bugs than a product with low code coverage. Apart from PCI audit needs, 
increasing user base of Kafka makes it important to increase code coverage of 
Kafka. Something just can not be improved without being measured.

 static analysis code coverage for pci audit needs
 -

 Key: KAFKA-1722
 URL: https://issues.apache.org/jira/browse/KAFKA-1722
 Project: Kafka
  Issue Type: Bug
  Components: security
Reporter: Joe Stein
Assignee: Ashish Kumar Singh
 Fix For: 0.9.0


 Code coverage is a measure used to describe the degree to which the source 
 code of a product is tested. A product with high code coverage has been more 
 thoroughly tested and has a lower chance of containing software bugs than a 
 product with low code coverage. Apart from PCI audit needs, increasing user 
 base of Kafka makes it important to increase code coverage of Kafka. 
 Something just can not be improved without being measured.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)