[jira] [Created] (KAFKA-2324) Update to Scala 2.11.7

2015-07-08 Thread Ismael Juma (JIRA)
Ismael Juma created KAFKA-2324:
--

 Summary: Update to Scala 2.11.7
 Key: KAFKA-2324
 URL: https://issues.apache.org/jira/browse/KAFKA-2324
 Project: Kafka
  Issue Type: Improvement
Reporter: Ismael Juma
Assignee: Ismael Juma
Priority: Minor


There are a number of fixes and improvements in the Scala 2.11.7 release, which 
is backwards and forwards compatible with 2.11.6:

http://www.scala-lang.org/news/2.11.7



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2325) ConsumerCoordinatorResponseTest doesn't compile with Scala 2.9.x

2015-07-08 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14618490#comment-14618490
 ] 

Ismael Juma commented on KAFKA-2325:


Alternatively, we could decide to drop Scala 2.9 altogether (the fact that we 
are committing code that doesn't compile and no-one has noticed for more than a 
week is not a good sign). In that case, this issue should be closed and a new 
one filed. Note that there was a mailing list thread about dropping Scala 2.9 
support a few months ago:

http://search-hadoop.com/m/uyzND1uIW3k2fZVfU1/scala+2.9&subj=Dropping+support+for+Scala+2+9+x

People were positive about it, but there weren't many responses.

 ConsumerCoordinatorResponseTest doesn't compile with Scala 2.9.x
 

 Key: KAFKA-2325
 URL: https://issues.apache.org/jira/browse/KAFKA-2325
 Project: Kafka
  Issue Type: Bug
Reporter: Ismael Juma

 It uses `scala.concurrent` classes that were introduced in Scala 2.10 and 
 backported to 2.9.3 (kafka supports 2.9.1 and 2.9.2 though).





[jira] [Created] (KAFKA-2325) ConsumerCoordinatorResponseTest doesn't compile with Scala 2.9.x

2015-07-08 Thread Ismael Juma (JIRA)
Ismael Juma created KAFKA-2325:
--

 Summary: ConsumerCoordinatorResponseTest doesn't compile with 
Scala 2.9.x
 Key: KAFKA-2325
 URL: https://issues.apache.org/jira/browse/KAFKA-2325
 Project: Kafka
  Issue Type: Bug
Reporter: Ismael Juma


It uses `scala.concurrent` classes that were introduced in Scala 2.10 and 
backported to 2.9.3 (kafka supports 2.9.1 and 2.9.2 though).





[jira] [Created] (KAFKA-2323) Simplify ScalaTest dependency versions

2015-07-08 Thread Ismael Juma (JIRA)
Ismael Juma created KAFKA-2323:
--

 Summary: Simplify ScalaTest dependency versions
 Key: KAFKA-2323
 URL: https://issues.apache.org/jira/browse/KAFKA-2323
 Project: Kafka
  Issue Type: Improvement
Reporter: Ismael Juma
Assignee: Ismael Juma
Priority: Minor


We currently use the following ScalaTest versions:
* 1.8 for Scala 2.9.x
* 1.9.1 for Scala 2.10.x
* 2.2.0 for Scala 2.11.x

I propose we simplify it to:
* 1.9.1 for Scala 2.9.x
* 2.2.5 for every other Scala version (currently 2.10.x and 2.11.x)

And since we will drop support for Scala 2.9.x soon, the conditional check for 
ScalaTest can be removed altogether.





[jira] [Created] (KAFKA-2320) Configure GitHub pull request build in Jenkins

2015-07-08 Thread Ismael Juma (JIRA)
Ismael Juma created KAFKA-2320:
--

 Summary: Configure GitHub pull request build in Jenkins
 Key: KAFKA-2320
 URL: https://issues.apache.org/jira/browse/KAFKA-2320
 Project: Kafka
  Issue Type: Task
Reporter: Ismael Juma


The details are available in the following Apache Infra post:
https://blogs.apache.org/infra/entry/github_pull_request_builds_now

I paste the instructions here as well for convenience:

{quote}
Here’s what you need to do to set it up:
* Create a new job, probably copied from an existing job.
* Make sure you’re not doing any “mvn deploy” or equivalent in the new job - 
this job shouldn’t be deploying any artifacts to Nexus, etc.
* Check the “Enable Git validated merge support” box - you can leave the first 
few fields set to their default, since we’re not actually pushing anything. 
This is just required to get the pull request builder to register correctly.
* Set the “GitHub project” field to the HTTP URL for your repository - 
i.e., http://github.com/apache/incubator-brooklyn/ - make sure it ends with 
that trailing slash and doesn’t include .git, etc.
* In the Git SCM section of the job configuration, set the repository URL to 
point to the GitHub git:// URL for your repository - i.e., 
git://github.com/apache/kafka.git.
* You should be able to leave the “Branches to build” field as is - this won’t 
be relevant anyway.
* Click the “Add” button in “Additional Behaviors” and choose “Strategy for 
choosing what to build”. Make sure the choosing strategy is set to “Build 
commits submitted for validated merge”.
* Uncheck any existing build triggers - this shouldn’t be running on a 
schedule, polling, running when SNAPSHOT dependencies are built, etc.
* Check the “Build pull requests to the repository” option in the build 
triggers.
* Optionally change anything else in the job that you’d like to be different 
for a pull request build than for a normal build - i.e., any downstream build 
triggers should probably be removed, you may want to change email recipients, 
etc.
* Save, and you’re done!
{quote}





[jira] [Created] (KAFKA-2321) Introduce CONTRIBUTING.md

2015-07-08 Thread Ismael Juma (JIRA)
Ismael Juma created KAFKA-2321:
--

 Summary: Introduce CONTRIBUTING.md
 Key: KAFKA-2321
 URL: https://issues.apache.org/jira/browse/KAFKA-2321
 Project: Kafka
  Issue Type: Improvement
Reporter: Ismael Juma
Assignee: Ismael Juma


This file is displayed when people create a pull request in GitHub. It should 
link to the relevant pages in the wiki and website.





Re: Review Request 36298: Patch for KAFKA-2321

2015-07-08 Thread Ismael Juma

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/36298/
---

(Updated July 8, 2015, 8:13 a.m.)


Review request for kafka.


Bugs: KAFKA-2321
https://issues.apache.org/jira/browse/KAFKA-2321


Repository: kafka


Description
---

kafka-2321; Introduce CONTRIBUTING.md


Diffs
-

  CONTRIBUTING.md PRE-CREATION 

Diff: https://reviews.apache.org/r/36298/diff/


Testing (updated)
---

Not a code change, so simply tested that the page looks fine and the links work.


Thanks,

Ismael Juma



[jira] [Commented] (KAFKA-1559) Upgrade Gradle wrapper to Gradle 2.0

2015-07-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14618264#comment-14618264
 ] 

ASF GitHub Bot commented on KAFKA-1559:
---

Github user sslavic closed the pull request at:

https://github.com/apache/kafka/pull/29


 Upgrade Gradle wrapper to Gradle 2.0
 

 Key: KAFKA-1559
 URL: https://issues.apache.org/jira/browse/KAFKA-1559
 Project: Kafka
  Issue Type: Task
  Components: build
Affects Versions: 0.8.1.1
Reporter: Stevo Slavic
Assignee: Ivan Lyutov
Priority: Trivial
  Labels: build
 Fix For: 0.8.2.0

 Attachments: KAFKA-1559.patch








[GitHub] kafka pull request: KAFKA-1372 Upgrade to Gradle 1.10

2015-07-08 Thread sslavic
Github user sslavic closed the pull request at:

https://github.com/apache/kafka/pull/23


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request: Upgrade Gradle wrapper to Gradle 2.0

2015-07-08 Thread sslavic
Github user sslavic closed the pull request at:

https://github.com/apache/kafka/pull/29




[jira] [Updated] (KAFKA-2321) Introduce CONTRIBUTING.md

2015-07-08 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-2321:
---
Status: Patch Available  (was: Open)

 Introduce CONTRIBUTING.md
 -

 Key: KAFKA-2321
 URL: https://issues.apache.org/jira/browse/KAFKA-2321
 Project: Kafka
  Issue Type: Improvement
Reporter: Ismael Juma
Assignee: Ismael Juma
 Attachments: KAFKA-2321.patch


 This file is displayed when people create a pull request in GitHub. It should 
 link to the relevant pages in the wiki and website.





[jira] [Updated] (KAFKA-2321) Introduce CONTRIBUTING.md

2015-07-08 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-2321:
---
Attachment: KAFKA-2321.patch

 Introduce CONTRIBUTING.md
 -

 Key: KAFKA-2321
 URL: https://issues.apache.org/jira/browse/KAFKA-2321
 Project: Kafka
  Issue Type: Improvement
Reporter: Ismael Juma
Assignee: Ismael Juma
 Attachments: KAFKA-2321.patch


 This file is displayed when people create a pull request in GitHub. It should 
 link to the relevant pages in the wiki and website.





[jira] [Updated] (KAFKA-1372) Upgrade to Gradle 1.10

2015-07-08 Thread Stevo Slavic (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stevo Slavic updated KAFKA-1372:

Resolution: Duplicate
Status: Resolved  (was: Patch Available)

Resolving as duplicate of KAFKA-1559

 Upgrade to Gradle 1.10
 --

 Key: KAFKA-1372
 URL: https://issues.apache.org/jira/browse/KAFKA-1372
 Project: Kafka
  Issue Type: Task
  Components: tools
Affects Versions: 0.8.1
Reporter: Stevo Slavic
Priority: Minor
  Labels: gradle
 Attachments: 0001-KAFKA-1372-Upgrade-to-Gradle-1.10.patch, 
 0001-KAFKA-1372-Upgrade-to-Gradle-1.11.patch


 Currently used version of Gradle wrapper is 1.6 while 1.11 is available.





Re: [DISCUSS] Using GitHub Pull Requests for contributions and code review

2015-07-08 Thread Ismael Juma
An update on this.

On Thu, Apr 30, 2015 at 2:12 PM, Ismael Juma ism...@juma.me.uk wrote:


1. CI builds triggered by GitHub PRs (this is supported by Apache
Infra, we need to request it for Kafka and provide whatever configuration
is needed)

 Filed https://issues.apache.org/jira/browse/KAFKA-2320, someone with a
Jenkins account needs to follow the instructions there.


1. Adapting Spark's merge_spark_pr script and integrating it into the
Kafka Git repository

 https://issues.apache.org/jira/browse/KAFKA-2187 includes a patch that
has received an initial review by Neha.


1. Updating the Kafka contribution wiki and adding a CONTRIBUTING.md
to the Git repository (this is shown when someone is creating a pull
request)

 Initial versions (feedback and/or improvements are welcome):

   - https://cwiki.apache.org/confluence/display/KAFKA/Contributing+Code+Changes
   - https://cwiki.apache.org/confluence/display/KAFKA/Patch+submission+and+review#Patchsubmissionandreview-MergingGitHubPullRequests
   - https://issues.apache.org/jira/browse/KAFKA-2321 (patch available)


1. Go through existing GitHub pull requests and close the ones that
are no longer relevant (there are quite a few as people have been opening
them over the years, but nothing was done about most of them)

 Not done yet. I think this should wait until we have merged a few PRs, as I
would like to invite people to open new PRs, if still relevant, while
pointing them to the documentation on how to go about it.


1. Other things I may be missing

 We also need to update the Contributing page on the website. I think
this should also wait until we are happy that the new approach works well
for us.

Any help moving this forward is appreciated. Aside from reviews, feedback,
and merging the changes, testing the new process is particularly useful (
https://issues.apache.org/jira/browse/KAFKA-2276 is an example).

Best,
Ismael


[jira] [Created] (KAFKA-2322) Use Java 7 features to improve code quality

2015-07-08 Thread Ismael Juma (JIRA)
Ismael Juma created KAFKA-2322:
--

 Summary: Use Java 7 features to improve code quality
 Key: KAFKA-2322
 URL: https://issues.apache.org/jira/browse/KAFKA-2322
 Project: Kafka
  Issue Type: Improvement
Reporter: Ismael Juma
Assignee: Ismael Juma


Once KAFKA-2316 is merged, we should take advantage of Java 7 features that 
improve code quality (readability, safety, etc.).

Examples:
* Diamond operator
* Try with resources
* Multi-catch
* String in switch (maybe)
* Suppressed exceptions (maybe)

This issue is for simple and mechanical improvements. More complex changes 
should be considered in separate issues (using nio.2, new concurrency classes, 
etc.).
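To make the list above concrete, here is a small, hypothetical sketch of what those mechanical improvements look like once Java 7 is the minimum version; the class and method names are illustrative only and do not come from the Kafka codebase:

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class Java7Features {

    // Diamond operator: the generic type argument on the right-hand side
    // is inferred from the declaration, so there is no need to repeat it
    static final List<String> WORDS = new ArrayList<>();

    // Try-with-resources closes the reader automatically, and multi-catch
    // handles several exception types with a single handler
    static String readFirstLine(String path) {
        try (BufferedReader reader = new BufferedReader(new FileReader(path))) {
            return reader.readLine();
        } catch (IOException | SecurityException e) {
            return "error: " + e.getClass().getSimpleName();
        }
    }

    // String in switch, also new in Java 7
    static String classify(String word) {
        switch (word) {
            case "hello": return "greeting";
            case "bye":   return "farewell";
            default:      return "unknown";
        }
    }

    public static void main(String[] args) {
        System.out.println(readFirstLine("/nonexistent/file.txt"));
        System.out.println(classify("hello"));
    }
}
```

Each of these rewrites is a local, behavior-preserving change, which is why the ticket scopes them as "simple and mechanical".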





[jira] [Commented] (KAFKA-2320) Configure GitHub pull request build in Jenkins

2015-07-08 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14618177#comment-14618177
 ] 

Ismael Juma commented on KAFKA-2320:


This needs to be done by someone that has permission to create new jobs in 
Jenkins.

 Configure GitHub pull request build in Jenkins
 --

 Key: KAFKA-2320
 URL: https://issues.apache.org/jira/browse/KAFKA-2320
 Project: Kafka
  Issue Type: Task
Reporter: Ismael Juma

 The details are available in the following Apache Infra post:
 https://blogs.apache.org/infra/entry/github_pull_request_builds_now
 I paste the instructions here as well for convenience:
 {quote}
 Here’s what you need to do to set it up:
 * Create a new job, probably copied from an existing job.
 * Make sure you’re not doing any “mvn deploy” or equivalent in the new job - 
 this job shouldn’t be deploying any artifacts to Nexus, etc.
 * Check the “Enable Git validated merge support” box - you can leave the 
 first few fields set to their default, since we’re not actually pushing 
 anything. This is just required to get the pull request builder to register 
 correctly.
 * Set the “GitHub project” field to the HTTP URL for your repository - 
 i.e., http://github.com/apache/incubator-brooklyn/ - make sure it ends with 
 that trailing slash and doesn’t include .git, etc.
 * In the Git SCM section of the job configuration, set the repository URL to 
 point to the GitHub git:// URL for your repository - i.e., 
 git://github.com/apache/kafka.git.
 * You should be able to leave the “Branches to build” field as is - this 
 won’t be relevant anyway.
 * Click the “Add” button in “Additional Behaviors” and choose “Strategy for 
 choosing what to build”. Make sure the choosing strategy is set to “Build 
 commits submitted for validated merge”.
 * Uncheck any existing build triggers - this shouldn’t be running on a 
 schedule, polling, running when SNAPSHOT dependencies are built, etc.
 * Check the “Build pull requests to the repository” option in the build 
 triggers.
 * Optionally change anything else in the job that you’d like to be different 
 for a pull request build than for a normal build - i.e., any downstream build 
 triggers should probably be removed, you may want to change email 
 recipients, etc.
 * Save, and you’re done!
 {quote}





[jira] [Commented] (KAFKA-1372) Upgrade to Gradle 1.10

2015-07-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14618260#comment-14618260
 ] 

ASF GitHub Bot commented on KAFKA-1372:
---

Github user sslavic closed the pull request at:

https://github.com/apache/kafka/pull/23


 Upgrade to Gradle 1.10
 --

 Key: KAFKA-1372
 URL: https://issues.apache.org/jira/browse/KAFKA-1372
 Project: Kafka
  Issue Type: Task
  Components: tools
Affects Versions: 0.8.1
Reporter: Stevo Slavic
Priority: Minor
  Labels: gradle
 Attachments: 0001-KAFKA-1372-Upgrade-to-Gradle-1.10.patch, 
 0001-KAFKA-1372-Upgrade-to-Gradle-1.11.patch


 Currently used version of Gradle wrapper is 1.6 while 1.11 is available.





Re: Review Request 36306: Patch for KAFKA-2323

2015-07-08 Thread Ismael Juma

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/36306/
---

(Updated July 8, 2015, 1:11 p.m.)


Review request for kafka.


Bugs: KAFKA-2323
https://issues.apache.org/jira/browse/KAFKA-2323


Repository: kafka


Description
---

Also remove unused `def`.


Diffs
-

  build.gradle 2b3c0099bf7013553582a3e6ef74abe267f3546e 

Diff: https://reviews.apache.org/r/36306/diff/


Testing (updated)
---

Tests passed for Scala 2.10.5 (although `SocketServer` failed a couple of times 
before it passed) and 2.11.6. Scala 2.9.x doesn't compile before this change, 
so those tests could not be run.


Thanks,

Ismael Juma



[jira] [Commented] (KAFKA-2324) Update to Scala 2.11.7

2015-07-08 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14618594#comment-14618594
 ] 

Ismael Juma commented on KAFKA-2324:


Created reviewboard https://reviews.apache.org/r/36307/diff/
 against branch upstream/trunk

 Update to Scala 2.11.7
 --

 Key: KAFKA-2324
 URL: https://issues.apache.org/jira/browse/KAFKA-2324
 Project: Kafka
  Issue Type: Improvement
Reporter: Ismael Juma
Assignee: Ismael Juma
Priority: Minor
 Attachments: KAFKA-2324.patch


 There are a number of fixes and improvements in the Scala 2.11.7 release, 
 which is backwards and forwards compatible with 2.11.6:
 http://www.scala-lang.org/news/2.11.7





Re: Review Request 36307: Patch for KAFKA-2324

2015-07-08 Thread Ismael Juma

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/36307/
---

(Updated July 8, 2015, 1:39 p.m.)


Review request for kafka.


Bugs: KAFKA-2324
https://issues.apache.org/jira/browse/KAFKA-2324


Repository: kafka


Description
---

kafka-2324; Update to Scala 2.11.7


Diffs
-

  README.md a9a5d1ef2b74440a7a63fef02078a1f54f107b8f 
  build.gradle 2b3c0099bf7013553582a3e6ef74abe267f3546e 

Diff: https://reviews.apache.org/r/36307/diff/


Testing (updated)
---

Ran the tests. `SocketServerTest` had to be rerun before it passed.


Thanks,

Ismael Juma



[jira] [Updated] (KAFKA-2324) Update to Scala 2.11.7

2015-07-08 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2324?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-2324:
---
Attachment: KAFKA-2324.patch

 Update to Scala 2.11.7
 --

 Key: KAFKA-2324
 URL: https://issues.apache.org/jira/browse/KAFKA-2324
 Project: Kafka
  Issue Type: Improvement
Reporter: Ismael Juma
Assignee: Ismael Juma
Priority: Minor
 Attachments: KAFKA-2324.patch


 There are a number of fixes and improvements in the Scala 2.11.7 release, 
 which is backwards and forwards compatible with 2.11.6:
 http://www.scala-lang.org/news/2.11.7





Review Request 36307: Patch for KAFKA-2324

2015-07-08 Thread Ismael Juma

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/36307/
---

Review request for kafka.


Bugs: KAFKA-2324
https://issues.apache.org/jira/browse/KAFKA-2324


Repository: kafka


Description
---

kafka-2324; Update to Scala 2.11.7


Diffs
-

  README.md a9a5d1ef2b74440a7a63fef02078a1f54f107b8f 
  build.gradle 2b3c0099bf7013553582a3e6ef74abe267f3546e 

Diff: https://reviews.apache.org/r/36307/diff/


Testing
---


Thanks,

Ismael Juma



[jira] [Updated] (KAFKA-2324) Update to Scala 2.11.7

2015-07-08 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2324?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-2324:
---
Status: Patch Available  (was: Open)

 Update to Scala 2.11.7
 --

 Key: KAFKA-2324
 URL: https://issues.apache.org/jira/browse/KAFKA-2324
 Project: Kafka
  Issue Type: Improvement
Reporter: Ismael Juma
Assignee: Ismael Juma
Priority: Minor
 Attachments: KAFKA-2324.patch


 There are a number of fixes and improvements in the Scala 2.11.7 release, 
 which is backwards and forwards compatible with 2.11.6:
 http://www.scala-lang.org/news/2.11.7





Re: Review Request 36307: Patch for KAFKA-2324

2015-07-08 Thread Sriharsha Chintalapani

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/36307/#review90891
---

Ship it!


Ship It!

- Sriharsha Chintalapani


On July 8, 2015, 1:39 p.m., Ismael Juma wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/36307/
 ---
 
 (Updated July 8, 2015, 1:39 p.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-2324
 https://issues.apache.org/jira/browse/KAFKA-2324
 
 
 Repository: kafka
 
 
 Description
 ---
 
 kafka-2324; Update to Scala 2.11.7
 
 
 Diffs
 -
 
   README.md a9a5d1ef2b74440a7a63fef02078a1f54f107b8f 
   build.gradle 2b3c0099bf7013553582a3e6ef74abe267f3546e 
 
 Diff: https://reviews.apache.org/r/36307/diff/
 
 
 Testing
 ---
 
 Ran the tests. `SocketServerTest` had to be rerun before it passed.
 
 
 Thanks,
 
 Ismael Juma
 




[jira] [Commented] (KAFKA-2325) ConsumerCoordinatorResponseTest doesn't compile with Scala 2.9.x

2015-07-08 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14618609#comment-14618609
 ] 

Sriharsha Chintalapani commented on KAFKA-2325:
---

[~ijuma] Let's reopen the thread about dropping support for Scala 2.9.x. I think 
at this point it's better to drop it if there aren't any users.

 ConsumerCoordinatorResponseTest doesn't compile with Scala 2.9.x
 

 Key: KAFKA-2325
 URL: https://issues.apache.org/jira/browse/KAFKA-2325
 Project: Kafka
  Issue Type: Bug
Reporter: Ismael Juma

 It uses `scala.concurrent` classes that were introduced in Scala 2.10 and 
 backported to 2.9.3 (kafka supports 2.9.1 and 2.9.2 though).





Review Request 36306: Patch for KAFKA-2323

2015-07-08 Thread Ismael Juma

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/36306/
---

Review request for kafka.


Bugs: KAFKA-2323
https://issues.apache.org/jira/browse/KAFKA-2323


Repository: kafka


Description
---

Also remove unused `def`.


Diffs
-

  build.gradle 2b3c0099bf7013553582a3e6ef74abe267f3546e 

Diff: https://reviews.apache.org/r/36306/diff/


Testing
---


Thanks,

Ismael Juma



[jira] [Commented] (KAFKA-2323) Simplify ScalaTest dependency versions

2015-07-08 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14618550#comment-14618550
 ] 

Ismael Juma commented on KAFKA-2323:


Created reviewboard https://reviews.apache.org/r/36306/diff/
 against branch upstream/trunk

 Simplify ScalaTest dependency versions
 --

 Key: KAFKA-2323
 URL: https://issues.apache.org/jira/browse/KAFKA-2323
 Project: Kafka
  Issue Type: Improvement
Reporter: Ismael Juma
Assignee: Ismael Juma
Priority: Minor
 Attachments: KAFKA-2323.patch


 We currently use the following ScalaTest versions:
 * 1.8 for Scala 2.9.x
 * 1.9.1 for Scala 2.10.x
 * 2.2.0 for Scala 2.11.x
 I propose we simplify it to:
 * 1.9.1 for Scala 2.9.x
 * 2.2.5 for every other Scala version (currently 2.10.x and 2.11.x)
 And since we will drop support for Scala 2.9.x soon, then the conditional 
 check for ScalaTest can be removed altogether.





[jira] [Updated] (KAFKA-2323) Simplify ScalaTest dependency versions

2015-07-08 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-2323:
---
Attachment: KAFKA-2323.patch

 Simplify ScalaTest dependency versions
 --

 Key: KAFKA-2323
 URL: https://issues.apache.org/jira/browse/KAFKA-2323
 Project: Kafka
  Issue Type: Improvement
Reporter: Ismael Juma
Assignee: Ismael Juma
Priority: Minor
 Attachments: KAFKA-2323.patch


 We currently use the following ScalaTest versions:
 * 1.8 for Scala 2.9.x
 * 1.9.1 for Scala 2.10.x
 * 2.2.0 for Scala 2.11.x
 I propose we simplify it to:
 * 1.9.1 for Scala 2.9.x
 * 2.2.5 for every other Scala version (currently 2.10.x and 2.11.x)
 And since we will drop support for Scala 2.9.x soon, then the conditional 
 check for ScalaTest can be removed altogether.





[jira] [Updated] (KAFKA-2323) Simplify ScalaTest dependency versions

2015-07-08 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-2323:
---
Status: Patch Available  (was: Open)

 Simplify ScalaTest dependency versions
 --

 Key: KAFKA-2323
 URL: https://issues.apache.org/jira/browse/KAFKA-2323
 Project: Kafka
  Issue Type: Improvement
Reporter: Ismael Juma
Assignee: Ismael Juma
Priority: Minor
 Attachments: KAFKA-2323.patch


 We currently use the following ScalaTest versions:
 * 1.8 for Scala 2.9.x
 * 1.9.1 for Scala 2.10.x
 * 2.2.0 for Scala 2.11.x
 I propose we simplify it to:
 * 1.9.1 for Scala 2.9.x
 * 2.2.5 for every other Scala version (currently 2.10.x and 2.11.x)
 And since we will drop support for Scala 2.9.x soon, then the conditional 
 check for ScalaTest can be removed altogether.





Re: Dropping support for Scala 2.9.x

2015-07-08 Thread Ismael Juma
Hi,

The responses in this thread were positive, but there weren't many. A few
months passed and Sriharsha encouraged me to reopen the thread given that
the 2.9 build has been broken for at least a week[1] and no-one seemed to
notice.

Do we want to invest more time so that the 2.9 build continues to work or
do we want to focus our efforts on 2.10 and 2.11? Please share your opinion.

Best,
Ismael

[1] https://issues.apache.org/jira/browse/KAFKA-2325

On Fri, Mar 27, 2015 at 2:20 PM, Ismael Juma mli...@juma.me.uk wrote:

 Hi all,

 The Kafka build currently includes support for Scala 2.9, which means that
 it cannot take advantage of features introduced in Scala 2.10 or depend on
 libraries that require it.

 This restricts the solutions available while trying to solve existing
 issues. I was browsing JIRA looking for areas to contribute and I quickly
 ran into two issues where this is the case:

 * KAFKA-1351: String.format is very expensive in Scala could be solved
 nicely by using the String interpolation feature introduced in Scala 2.10.

 * KAFKA-1595: Remove deprecated and slower scala JSON parser from
 kafka.consumer.TopicCount could be solved by using an existing JSON
 library, but both jackson-scala and play-json require 2.10 (argonaut
 supports Scala 2.9, but it brings other dependencies like scalaz). We can
 workaround this by writing our own code instead of using libraries, of
 course, but it's not ideal.

 Other features like Scala Futures and value classes would also be useful
 in some situations, I would think (for a more extensive list of new
 features, see
 http://scala-language.1934581.n4.nabble.com/Scala-2-10-0-now-available-td4634126.html
 ).

 Another pain point of supporting 2.9.x is that it doubles the number of
 build and test configurations required from 2 to 4 (because the 2.9.x
 series was not necessarily binary compatible).

 A strong argument for maintaining support for 2.9.x was the client
 library, but that has been rewritten in Java.

 It's also worth mentioning that Scala 2.9.1 was released in August 2011
 (more than 3.5 years ago) and the 2.9.x series hasn't received updates of
 any sort since early 2013. Scala 2.10.0, in turn, was released in January
 2013 (over 2 years ago) and 2.10.5, the last planned release in the 2.10.x
 series, has been recently released (so even 2.10.x won't be receiving
 updates any longer).

 All in all, I think it would not be unreasonable to drop support for Scala
 2.9.x in a future release, but I may be missing something. What do others
 think?

 Ismael



Re: Dropping support for Scala 2.9.x

2015-07-08 Thread Grant Henke
+1 for dropping 2.9

On Wed, Jul 8, 2015 at 9:15 AM, Sriharsha Chintalapani ka...@harsha.io
wrote:

 I am +1 on dropping 2.9.x support.

 Thanks,
 Harsha


 On July 8, 2015 at 7:08:12 AM, Ismael Juma (mli...@juma.me.uk) wrote:

 Hi,

 The responses in this thread were positive, but there weren't many. A few
 months passed and Sriharsha encouraged me to reopen the thread given that
 the 2.9 build has been broken for at least a week[1] and no-one seemed to
 notice.

 Do we want to invest more time so that the 2.9 build continues to work or
 do we want to focus our efforts on 2.10 and 2.11? Please share your
 opinion.

 Best,
 Ismael

 [1] https://issues.apache.org/jira/browse/KAFKA-2325

 On Fri, Mar 27, 2015 at 2:20 PM, Ismael Juma mli...@juma.me.uk wrote:

  Hi all,
 
  The Kafka build currently includes support for Scala 2.9, which means
 that
  it cannot take advantage of features introduced in Scala 2.10 or depend
 on
  libraries that require it.
 
  This restricts the solutions available while trying to solve existing
  issues. I was browsing JIRA looking for areas to contribute and I quickly
  ran into two issues where this is the case:
 
  * KAFKA-1351: String.format is very expensive in Scala could be solved
  nicely by using the String interpolation feature introduced in Scala
 2.10.
 
  * KAFKA-1595: Remove deprecated and slower scala JSON parser from
  kafka.consumer.TopicCount could be solved by using an existing JSON
  library, but both jackson-scala and play-json require 2.10 (argonaut
  supports Scala 2.9, but it brings other dependencies like scalaz). We can
  workaround this by writing our own code instead of using libraries, of
  course, but it's not ideal.
 
  Other features like Scala Futures and value classes would also be useful
  in some situations, I would think (for a more extensive list of new
  features, see
 
 http://scala-language.1934581.n4.nabble.com/Scala-2-10-0-now-available-td4634126.html
  ).
 
  Another pain point of supporting 2.9.x is that it doubles the number of
  build and test configurations required from 2 to 4 (because the 2.9.x
  series was not necessarily binary compatible).
 
  A strong argument for maintaining support for 2.9.x was the client
  library, but that has been rewritten in Java.
 
  It's also worth mentioning that Scala 2.9.1 was released in August 2011
  (more than 3.5 years ago) and the 2.9.x series hasn't received updates of
  any sort since early 2013. Scala 2.10.0, in turn, was released in January
  2013 (over 2 years ago) and 2.10.5, the last planned release in the
 2.10.x
  series, has been recently released (so even 2.10.x won't be receiving
  updates any longer).
 
  All in all, I think it would not be unreasonable to drop support for
 Scala
  2.9.x in a future release, but I may be missing something. What do others
  think?
 
  Ismael
 




-- 
Grant Henke
Solutions Consultant | Cloudera
ghe...@cloudera.com | twitter.com/gchenke | linkedin.com/in/granthenke


Re: Dropping support for Scala 2.9.x

2015-07-08 Thread Sriharsha Chintalapani
I am +1 on dropping 2.9.x support.

Thanks, 
Harsha


On July 8, 2015 at 7:08:12 AM, Ismael Juma (mli...@juma.me.uk) wrote:

Hi,  

The responses in this thread were positive, but there weren't many. A few  
months passed and Sriharsha encouraged me to reopen the thread given that  
the 2.9 build has been broken for at least a week[1] and no-one seemed to  
notice.  

Do we want to invest more time so that the 2.9 build continues to work or  
do we want to focus our efforts on 2.10 and 2.11? Please share your opinion.  

Best,  
Ismael  

[1] https://issues.apache.org/jira/browse/KAFKA-2325  



[jira] [Commented] (KAFKA-2308) New producer + Snappy face un-compression errors after broker restart

2015-07-08 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618873#comment-14618873
 ] 

Gwen Shapira commented on KAFKA-2308:
-

yes, I saw the Snappy test case too :)

Since it's a confirmed Snappy bug, I don't think we need a Kafka test case. We 
can just protect that call, right?

 New producer + Snappy face un-compression errors after broker restart
 -

 Key: KAFKA-2308
 URL: https://issues.apache.org/jira/browse/KAFKA-2308
 Project: Kafka
  Issue Type: Bug
Reporter: Gwen Shapira
Assignee: Gwen Shapira
 Attachments: KAFKA-2308.patch


 Looks like the new producer, when used with Snappy, is sending messages the 
 brokers can't decompress following a broker restart. This issue was discussed 
 in a few mailing list threads, but I don't think we ever resolved it.
 I can reproduce with trunk and Snappy 1.1.1.7. 
 To reproduce:
 1. Start 3 brokers.
 2. Create a topic with 3 partitions and 3 replicas each.
 3. Start the performance producer with --new-producer --compression-codec 2 
 (and set the number of messages fairly high, to give yourself time; I went 
 with 10M).
 4. Bounce one of the brokers.
 5. The log of one of the surviving nodes should contain errors like:
 {code}
 2015-07-02 13:45:59,300 ERROR kafka.server.ReplicaManager: [Replica Manager 
 on Broker 66]: Error processing append operation on partition [t3,0]
 kafka.common.KafkaException:
 at 
 kafka.message.ByteBufferMessageSet$$anon$1.makeNext(ByteBufferMessageSet.scala:94)
 at 
 kafka.message.ByteBufferMessageSet$$anon$1.makeNext(ByteBufferMessageSet.scala:64)
 at 
 kafka.utils.IteratorTemplate.maybeComputeNext(IteratorTemplate.scala:66)
 at kafka.utils.IteratorTemplate.hasNext(IteratorTemplate.scala:58)
 at 
 kafka.message.ByteBufferMessageSet$$anon$2.innerDone(ByteBufferMessageSet.scala:177)
 at 
 kafka.message.ByteBufferMessageSet$$anon$2.makeNext(ByteBufferMessageSet.scala:218)
 at 
 kafka.message.ByteBufferMessageSet$$anon$2.makeNext(ByteBufferMessageSet.scala:173)
 at 
 kafka.utils.IteratorTemplate.maybeComputeNext(IteratorTemplate.scala:66)
 at kafka.utils.IteratorTemplate.hasNext(IteratorTemplate.scala:58)
 at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
 at scala.collection.Iterator$class.foreach(Iterator.scala:727)
 at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
 at 
 scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
 at 
 scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
 at 
 scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
 at 
 scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
 at scala.collection.AbstractIterator.to(Iterator.scala:1157)
 at 
 scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
 at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
 at 
 kafka.message.ByteBufferMessageSet.validateMessagesAndAssignOffsets(ByteBufferMessageSet.scala:267)
 at kafka.log.Log.liftedTree1$1(Log.scala:327)
 at kafka.log.Log.append(Log.scala:326)
 at 
 kafka.cluster.Partition$$anonfun$appendMessagesToLeader$1.apply(Partition.scala:423)
 at 
 kafka.cluster.Partition$$anonfun$appendMessagesToLeader$1.apply(Partition.scala:409)
 at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262)
 at kafka.utils.CoreUtils$.inReadLock(CoreUtils.scala:268)
 at kafka.cluster.Partition.appendMessagesToLeader(Partition.scala:409)
 at 
 kafka.server.ReplicaManager$$anonfun$appendToLocalLog$2.apply(ReplicaManager.scala:365)
 at 
 kafka.server.ReplicaManager$$anonfun$appendToLocalLog$2.apply(ReplicaManager.scala:350)
 at 
 scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
 at 
 scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
 at 
 scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
 at 
 scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
 at 
 scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:226)
 at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:39)
 at scala.collection.mutable.HashMap.foreach(HashMap.scala:98)
 at 
 scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
 at scala.collection.AbstractTraversable.map(Traversable.scala:105)
 at 
 kafka.server.ReplicaManager.appendToLocalLog(ReplicaManager.scala:350)
 at 
 

[jira] [Commented] (KAFKA-2298) Client Selector can drop connections on InvalidReceiveException without notifying NetworkClient

2015-07-08 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618803#comment-14618803
 ] 

Sriharsha Chintalapani commented on KAFKA-2298:
---

[~lindong] [~jjkoshy] I have a lot of changes in Selector for KAFKA-1690 and 
others. If you don't mind, can I send this as part of the KAFKA-1690 changes? 
It saves me merge issues :).

 Client Selector can drop connections on InvalidReceiveException without 
 notifying NetworkClient
 ---

 Key: KAFKA-2298
 URL: https://issues.apache.org/jira/browse/KAFKA-2298
 Project: Kafka
  Issue Type: Bug
Reporter: Dong Lin
Assignee: Dong Lin
  Labels: quotas
 Attachments: KAFKA-2298.patch, KAFKA-2298_2015-06-23_18:47:54.patch, 
 KAFKA-2298_2015-06-24_13:00:39.patch


 I ran into the problem described in KAFKA-2266 when testing quotas. I was told 
 the bug was fixed in KAFKA-2266 after I figured out the problem.
 But the patch provided in KAFKA-2266 probably doesn't solve all related 
 problems. From reading the code, there is still one edge case where the client 
 selector can close a connection in poll() without notifying NetworkClient.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2308) New producer + Snappy face un-compression errors after broker restart

2015-07-08 Thread Ewen Cheslack-Postava (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618867#comment-14618867
 ] 

Ewen Cheslack-Postava commented on KAFKA-2308:
--

[~gwenshap] The test case you gave doesn't quite do enough to trigger the bug. 
It releases the same buffer twice, but doesn't reuse it. I think you'd need to 
get the test to do something more like:

* Fill first record batch (batch 1) with records and drain (causing buffer to 
be released).
* At least start creating another batch (batch 2). This allocates the buffer to 
that batch.
* Reenqueue batch 1 and drain (causing buffer to be released second time).
* Continue enqueuing until it creates *another* batch (batch 3), which 
allocates the buffer yet again.
* Drain batches 2 and 3 and validate their contents.


Review Request 36298: Patch for KAFKA-2321

2015-07-08 Thread Ismael Juma

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/36298/
---

Review request for kafka.


Bugs: KAFKA-2321
https://issues.apache.org/jira/browse/KAFKA-2321


Repository: kafka


Description
---

kafka-2321; Introduce CONTRIBUTING.md


Diffs
-

  CONTRIBUTING.md PRE-CREATION 

Diff: https://reviews.apache.org/r/36298/diff/


Testing
---


Thanks,

Ismael Juma



[jira] [Commented] (KAFKA-2321) Introduce CONTRIBUTING.md

2015-07-08 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618182#comment-14618182
 ] 

Ismael Juma commented on KAFKA-2321:


Created reviewboard https://reviews.apache.org/r/36298/diff/
 against branch upstream/trunk

 Introduce CONTRIBUTING.md
 -

 Key: KAFKA-2321
 URL: https://issues.apache.org/jira/browse/KAFKA-2321
 Project: Kafka
  Issue Type: Improvement
Reporter: Ismael Juma
Assignee: Ismael Juma
 Attachments: KAFKA-2321.patch


 This file is displayed when people create a pull request in GitHub. It should 
 link to the relevant pages in the wiki and website.





[jira] [Commented] (KAFKA-2321) Introduce CONTRIBUTING.md

2015-07-08 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618193#comment-14618193
 ] 

Ismael Juma commented on KAFKA-2321:


This should only be merged after KAFKA-2187 is merged.



[jira] [Commented] (KAFKA-2316) Drop java 1.6 support

2015-07-08 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14619093#comment-14619093
 ] 

Gwen Shapira commented on KAFKA-2316:
-

ouch, yeah. I guess Jenkins is running with Java 6 still.

I don't have access to modify our build job; let's see if someone in the PMC 
can help:
[~joestein] [~junrao] [~guozhang] [~jjkoshy] ?

 Drop java 1.6 support
 -

 Key: KAFKA-2316
 URL: https://issues.apache.org/jira/browse/KAFKA-2316
 Project: Kafka
  Issue Type: Bug
Reporter: Sriharsha Chintalapani
Assignee: Sriharsha Chintalapani
 Fix For: 0.8.3

 Attachments: KAFKA-2316.patch








[jira] [Commented] (KAFKA-2316) Drop java 1.6 support

2015-07-08 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14619092#comment-14619092
 ] 

Ismael Juma commented on KAFKA-2316:


I don't have access to the config, which may be more enlightening. Still, I did 
notice that the following is using an old version of Scala (it should use 
2.10.5 instead):

`./gradlew -PscalaVersion=2.10.1 test`

Seems unrelated to the problem at hand though.



[jira] [Commented] (KAFKA-2316) Drop java 1.6 support

2015-07-08 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14619143#comment-14619143
 ] 

Gwen Shapira commented on KAFKA-2316:
-

Apache Member [~jarcec] volunteered to help out! 
Please go ahead and upgrade our Java, Jarcec. 



[jira] [Commented] (KAFKA-2316) Drop java 1.6 support

2015-07-08 Thread Jarek Jarcec Cecho (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14619146#comment-14619146
 ] 

Jarek Jarcec Cecho commented on KAFKA-2316:
---

I've configured the job to use JDK 7u51. Let me know if you need further 
tweaks to that job.



Build failed in Jenkins: Kafka-trunk #535

2015-07-08 Thread Apache Jenkins Server
See https://builds.apache.org/job/Kafka-trunk/535/

--
[...truncated 444 lines...]
org.apache.kafka.common.record.RecordTest  testFields[48] PASSED

org.apache.kafka.common.record.RecordTest  testChecksum[48] PASSED

org.apache.kafka.common.record.RecordTest  testEquality[48] PASSED

org.apache.kafka.common.record.RecordTest  testFields[49] PASSED

org.apache.kafka.common.record.RecordTest  testChecksum[49] PASSED

org.apache.kafka.common.record.RecordTest  testEquality[49] PASSED

org.apache.kafka.common.record.RecordTest  testFields[50] PASSED

org.apache.kafka.common.record.RecordTest  testChecksum[50] PASSED

org.apache.kafka.common.record.RecordTest  testEquality[50] PASSED

org.apache.kafka.common.record.RecordTest  testFields[51] PASSED

org.apache.kafka.common.record.RecordTest  testChecksum[51] PASSED

org.apache.kafka.common.record.RecordTest  testEquality[51] PASSED

org.apache.kafka.common.record.RecordTest  testFields[52] PASSED

org.apache.kafka.common.record.RecordTest  testChecksum[52] PASSED

org.apache.kafka.common.record.RecordTest  testEquality[52] PASSED

org.apache.kafka.common.record.RecordTest  testFields[53] PASSED

org.apache.kafka.common.record.RecordTest  testChecksum[53] PASSED

org.apache.kafka.common.record.RecordTest  testEquality[53] PASSED

org.apache.kafka.common.record.RecordTest  testFields[54] PASSED

org.apache.kafka.common.record.RecordTest  testChecksum[54] PASSED

org.apache.kafka.common.record.RecordTest  testEquality[54] PASSED

org.apache.kafka.common.record.RecordTest  testFields[55] PASSED

org.apache.kafka.common.record.RecordTest  testChecksum[55] PASSED

org.apache.kafka.common.record.RecordTest  testEquality[55] PASSED

org.apache.kafka.common.record.RecordTest  testFields[56] PASSED

org.apache.kafka.common.record.RecordTest  testChecksum[56] PASSED

org.apache.kafka.common.record.RecordTest  testEquality[56] PASSED

org.apache.kafka.common.record.RecordTest  testFields[57] PASSED

org.apache.kafka.common.record.RecordTest  testChecksum[57] PASSED

org.apache.kafka.common.record.RecordTest  testEquality[57] PASSED

org.apache.kafka.common.record.RecordTest  testFields[58] PASSED

org.apache.kafka.common.record.RecordTest  testChecksum[58] PASSED

org.apache.kafka.common.record.RecordTest  testEquality[58] PASSED

org.apache.kafka.common.record.RecordTest  testFields[59] PASSED

org.apache.kafka.common.record.RecordTest  testChecksum[59] PASSED

org.apache.kafka.common.record.RecordTest  testEquality[59] PASSED

org.apache.kafka.common.record.RecordTest  testFields[60] PASSED

org.apache.kafka.common.record.RecordTest  testChecksum[60] PASSED

org.apache.kafka.common.record.RecordTest  testEquality[60] PASSED

org.apache.kafka.common.record.RecordTest  testFields[61] PASSED

org.apache.kafka.common.record.RecordTest  testChecksum[61] PASSED

org.apache.kafka.common.record.RecordTest  testEquality[61] PASSED

org.apache.kafka.common.record.RecordTest  testFields[62] PASSED

org.apache.kafka.common.record.RecordTest  testChecksum[62] PASSED

org.apache.kafka.common.record.RecordTest  testEquality[62] PASSED

org.apache.kafka.common.record.RecordTest  testFields[63] PASSED

org.apache.kafka.common.record.RecordTest  testChecksum[63] PASSED

org.apache.kafka.common.record.RecordTest  testEquality[63] PASSED

org.apache.kafka.common.record.MemoryRecordsTest  testIterator[0] PASSED

org.apache.kafka.common.record.MemoryRecordsTest  testIterator[1] PASSED

org.apache.kafka.common.record.MemoryRecordsTest  testIterator[2] PASSED

org.apache.kafka.common.record.MemoryRecordsTest  testIterator[3] PASSED

org.apache.kafka.clients.ClientUtilsTest  testParseAndValidateAddresses PASSED

org.apache.kafka.clients.ClientUtilsTest  testNoPort PASSED

org.apache.kafka.clients.NetworkClientTest  testSendToUnreadyNode PASSED

org.apache.kafka.clients.NetworkClientTest  testReadyAndDisconnect PASSED

org.apache.kafka.clients.NetworkClientTest  testSimpleRequestResponse PASSED

org.apache.kafka.clients.MetadataTest  testMetadata FAILED
java.lang.AssertionError: 
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:44)
at org.junit.Assert.assertFalse(Assert.java:69)
at org.junit.Assert.assertFalse(Assert.java:80)
at org.apache.kafka.clients.MetadataTest.tearDown(MetadataTest.java:34)

org.apache.kafka.clients.MetadataTest  testMetadataUpdateWaitTime PASSED

org.apache.kafka.clients.MetadataTest  testFailedUpdate PASSED

org.apache.kafka.clients.producer.KafkaProducerTest  testSerializerClose PASSED

org.apache.kafka.clients.producer.KafkaProducerTest  
testConstructorFailureCloseResource PASSED

org.apache.kafka.clients.producer.ProducerRecordTest  testEqualsAndHashCode 
PASSED

org.apache.kafka.clients.producer.MockProducerTest  testAutoCompleteMock PASSED

org.apache.kafka.clients.producer.MockProducerTest  testPartitioner PASSED


[jira] [Commented] (KAFKA-2308) New producer + Snappy face un-compression errors after broker restart

2015-07-08 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618921#comment-14618921
 ] 

Guozhang Wang commented on KAFKA-2308:
--

I was unit testing the patch while writing the last comment :) Just shipped it 
and committed to trunk.


[jira] [Commented] (KAFKA-2308) New producer + Snappy face un-compression errors after broker restart

2015-07-08 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=1461#comment-1461
 ] 

Guozhang Wang commented on KAFKA-2308:
--

Agree, we do not need a test case inside Kafka code.


Re: Review Request 36290: Patch for KAFKA-2308

2015-07-08 Thread Guozhang Wang

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/36290/#review90920
---

Ship it!


Ship It!

- Guozhang Wang


On July 8, 2015, 2:47 a.m., Gwen Shapira wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/36290/
 ---
 
 (Updated July 8, 2015, 2:47 a.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-2308
 https://issues.apache.org/jira/browse/KAFKA-2308
 
 
 Repository: kafka
 
 
 Description
 ---
 
 prevent double closing of record batches in producer
 
 
 Diffs
 -
 
   clients/src/main/java/org/apache/kafka/common/record/MemoryRecords.java 
 b2db2403868b1e7361b8514cfed2e76ef785edee 
 
 Diff: https://reviews.apache.org/r/36290/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Gwen Shapira
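The one-line description above ("prevent double closing of record batches in producer") amounts to making close() idempotent on the buffer-backed record set. A minimal sketch of the idea, assuming a ByteBuffer-backed batch — this is illustrative only, not the actual MemoryRecords patch:

```java
import java.nio.ByteBuffer;

// Illustrative sketch (not the real MemoryRecords code): a buffer-backed
// batch whose close() is idempotent. Without the guard, a second close()
// would flip() the buffer again and silently discard the written data.
public class RecordBatch {
    private final ByteBuffer buffer;
    private boolean writable = true;

    public RecordBatch(int capacity) {
        this.buffer = ByteBuffer.allocate(capacity);
    }

    public void append(byte[] bytes) {
        if (!writable)
            throw new IllegalStateException("batch is closed for appends");
        buffer.put(bytes);
    }

    // Idempotent: only the first call flips the buffer into read mode;
    // subsequent calls are no-ops instead of re-flipping the buffer.
    public void close() {
        if (writable) {
            buffer.flip();
            writable = false;
        }
    }

    public int sizeInBytes() {
        return buffer.remaining();
    }
}
```

With the guard, closing the batch twice leaves the written bytes readable; without it, the second flip() would make the batch appear empty.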
 




Build failed in Jenkins: Kafka-trunk #534

2015-07-08 Thread Apache Jenkins Server
See https://builds.apache.org/job/Kafka-trunk/534/changes

Changes:

[cshapi] KAFKA-2316: Drop java 1.6 support; patched by Sriharsha Chintalapani 
reviewed by Ismael Juma and Gwen Shapira

--
Started by user gwenshap
Building remotely on ubuntu-6 (docker Ubuntu ubuntu) in workspace 
https://builds.apache.org/job/Kafka-trunk/ws/
  git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
  git config remote.origin.url 
  https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
  git --version # timeout=10
  git fetch --tags --progress 
  https://git-wip-us.apache.org/repos/asf/kafka.git 
  +refs/heads/*:refs/remotes/origin/*
  git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
  git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision 7df39e0394a6fd2f26e8f12768ebf7fecd56e3da 
(refs/remotes/origin/trunk)
  git config core.sparsecheckout # timeout=10
  git checkout -f 7df39e0394a6fd2f26e8f12768ebf7fecd56e3da
  git rev-list 67b6b9a45ba6f096645f4f344dfdcf1df1e50dd6 # timeout=10
Setting 
GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
[Kafka-trunk] $ /bin/bash -xe /tmp/hudson3444826443579843907.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.1/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.5
:downloadWrapper UP-TO-DATE

BUILD SUCCESSFUL

Total time: 12.978 secs
Setting 
GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
[Kafka-trunk] $ /bin/bash -xe /tmp/hudson7149393101441158307.sh
+ ./gradlew -PscalaVersion=2.10.1 test
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.4/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.1
:compileJava UP-TO-DATE
:processResources UP-TO-DATE
:classes UP-TO-DATE
:rat
Rat report: build/rat/rat-report.html
:compileTestJava UP-TO-DATE
:processTestResources UP-TO-DATE
:testClasses UP-TO-DATE
:test UP-TO-DATE
:clients:compileJava FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':clients:compileJava'.
 invalid source release: 1.7

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED

Total time: 13.096 secs
Build step 'Execute shell' marked build as failure
Setting 
GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1


[jira] [Commented] (KAFKA-2308) New producer + Snappy face un-compression errors after broker restart

2015-07-08 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618901#comment-14618901
 ] 

Gwen Shapira commented on KAFKA-2308:
-

Was that a ship it, [~guozhang]? :)

 New producer + Snappy face un-compression errors after broker restart
 -

 Key: KAFKA-2308
 URL: https://issues.apache.org/jira/browse/KAFKA-2308
 Project: Kafka
  Issue Type: Bug
Reporter: Gwen Shapira
Assignee: Gwen Shapira
 Attachments: KAFKA-2308.patch


 Looks like the new producer, when used with Snappy, following a broker 
 restart, is sending messages the brokers can't decompress. This issue was 
 discussed in a few mailing list threads, but I don't think we ever resolved it.
 I can reproduce with trunk and Snappy 1.1.1.7. 
 To reproduce:
 1. Start 3 brokers
 2. Create a topic with 3 partitions and 3 replicas each.
 3. Start the performance producer with --new-producer --compression-codec 2 (and 
 set the number of messages fairly high, to give you time. I went with 10M)
 4. Bounce one of the brokers
 5. The log of one of the surviving nodes should contain errors like:
 {code}
 2015-07-02 13:45:59,300 ERROR kafka.server.ReplicaManager: [Replica Manager on Broker 66]: Error processing append operation on partition [t3,0]
 kafka.common.KafkaException:
 at kafka.message.ByteBufferMessageSet$$anon$1.makeNext(ByteBufferMessageSet.scala:94)
 at kafka.message.ByteBufferMessageSet$$anon$1.makeNext(ByteBufferMessageSet.scala:64)
 at kafka.utils.IteratorTemplate.maybeComputeNext(IteratorTemplate.scala:66)
 at kafka.utils.IteratorTemplate.hasNext(IteratorTemplate.scala:58)
 at kafka.message.ByteBufferMessageSet$$anon$2.innerDone(ByteBufferMessageSet.scala:177)
 at kafka.message.ByteBufferMessageSet$$anon$2.makeNext(ByteBufferMessageSet.scala:218)
 at kafka.message.ByteBufferMessageSet$$anon$2.makeNext(ByteBufferMessageSet.scala:173)
 at kafka.utils.IteratorTemplate.maybeComputeNext(IteratorTemplate.scala:66)
 at kafka.utils.IteratorTemplate.hasNext(IteratorTemplate.scala:58)
 at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
 at scala.collection.Iterator$class.foreach(Iterator.scala:727)
 at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
 at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
 at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
 at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
 at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
 at scala.collection.AbstractIterator.to(Iterator.scala:1157)
 at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
 at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
 at kafka.message.ByteBufferMessageSet.validateMessagesAndAssignOffsets(ByteBufferMessageSet.scala:267)
 at kafka.log.Log.liftedTree1$1(Log.scala:327)
 at kafka.log.Log.append(Log.scala:326)
 at kafka.cluster.Partition$$anonfun$appendMessagesToLeader$1.apply(Partition.scala:423)
 at kafka.cluster.Partition$$anonfun$appendMessagesToLeader$1.apply(Partition.scala:409)
 at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262)
 at kafka.utils.CoreUtils$.inReadLock(CoreUtils.scala:268)
 at kafka.cluster.Partition.appendMessagesToLeader(Partition.scala:409)
 at kafka.server.ReplicaManager$$anonfun$appendToLocalLog$2.apply(ReplicaManager.scala:365)
 at kafka.server.ReplicaManager$$anonfun$appendToLocalLog$2.apply(ReplicaManager.scala:350)
 at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
 at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
 at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
 at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
 at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:226)
 at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:39)
 at scala.collection.mutable.HashMap.foreach(HashMap.scala:98)
 at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
 at scala.collection.AbstractTraversable.map(Traversable.scala:105)
 at kafka.server.ReplicaManager.appendToLocalLog(ReplicaManager.scala:350)
 at kafka.server.ReplicaManager.appendMessages(ReplicaManager.scala:286)
 at kafka.server.KafkaApis.handleProducerRequest(KafkaApis.scala:270)
 at kafka.server.KafkaApis.handle(KafkaApis.scala:57)
 {code}

Re: Dropping support for Scala 2.9.x

2015-07-08 Thread Guozhang Wang
+1.

Scala 2.9 has been 4 years old and I think it is time to drop it.

On Wed, Jul 8, 2015 at 7:22 AM, Grant Henke ghe...@cloudera.com wrote:

 +1 for dropping 2.9

 On Wed, Jul 8, 2015 at 9:15 AM, Sriharsha Chintalapani ka...@harsha.io
 wrote:

  I am +1 on dropping 2.9.x support.
 
  Thanks,
  Harsha
 
 
  On July 8, 2015 at 7:08:12 AM, Ismael Juma (mli...@juma.me.uk) wrote:
 
  Hi,
 
  The responses in this thread were positive, but there weren't many. A few
  months passed and Sriharsha encouraged me to reopen the thread given that
  the 2.9 build has been broken for at least a week[1] and no-one seemed to
  notice.
 
  Do we want to invest more time so that the 2.9 build continues to work or
  do we want to focus our efforts on 2.10 and 2.11? Please share your
  opinion.
 
  Best,
  Ismael
 
  [1] https://issues.apache.org/jira/browse/KAFKA-2325
 
  On Fri, Mar 27, 2015 at 2:20 PM, Ismael Juma mli...@juma.me.uk wrote:
 
   Hi all,
  
   The Kafka build currently includes support for Scala 2.9, which means
  that
   it cannot take advantage of features introduced in Scala 2.10 or depend
  on
   libraries that require it.
  
   This restricts the solutions available while trying to solve existing
   issues. I was browsing JIRA looking for areas to contribute and I
 quickly
   ran into two issues where this is the case:
  
   * KAFKA-1351: "String.format is very expensive in Scala" could be solved
   nicely by using the String interpolation feature introduced in Scala 2.10.
  
   * KAFKA-1595: "Remove deprecated and slower scala JSON parser from
   kafka.consumer.TopicCount" could be solved by using an existing JSON
   library, but both jackson-scala and play-json require 2.10 (argonaut
   supports Scala 2.9, but it brings other dependencies like scalaz). We can
   work around this by writing our own code instead of using libraries, of
   course, but it's not ideal.
  
   Other features like Scala Futures and value classes would also be useful
   in some situations, I would think (for a more extensive list of new
   features, see
   http://scala-language.1934581.n4.nabble.com/Scala-2-10-0-now-available-td4634126.html).
  
   Another pain point of supporting 2.9.x is that it doubles the number of
   build and test configurations required from 2 to 4 (because the 2.9.x
   series was not necessarily binary compatible).
  
   A strong argument for maintaining support for 2.9.x was the client
   library, but that has been rewritten in Java.
  
   It's also worth mentioning that Scala 2.9.1 was released in August 2011
   (more than 3.5 years ago) and the 2.9.x series hasn't received updates
 of
   any sort since early 2013. Scala 2.10.0, in turn, was released in
 January
   2013 (over 2 years ago) and 2.10.5, the last planned release in the
  2.10.x
   series, has been recently released (so even 2.10.x won't be receiving
   updates any longer).
  
   All in all, I think it would not be unreasonable to drop support for
  Scala
   2.9.x in a future release, but I may be missing something. What do
 others
   think?
  
   Ismael
  
 



 --
 Grant Henke
 Solutions Consultant | Cloudera
 ghe...@cloudera.com | twitter.com/gchenke | linkedin.com/in/granthenke




-- 
-- Guozhang
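For context on the KAFKA-1351 point raised in the thread above: format-style message building parses the pattern and boxes its arguments on every call, which is exactly what Scala 2.10's s"..." interpolation avoids by compiling down to concatenation. A small illustration in Java (class and method names are mine, not Kafka code):

```java
// Illustration only (not Kafka code) of the KAFKA-1351 point: String.format
// parses the pattern string and boxes primitive arguments on every call,
// while plain concatenation compiles to StringBuilder appends. Scala 2.10's
// s"..." interpolation gives format-like syntax with concatenation-like cost.
public class LogMessage {
    // Pattern parsed, int boxed, and a Formatter allocated on each call.
    static String viaFormat(String topic, int partition) {
        return String.format("Partition [%s,%d]", topic, partition);
    }

    // Compiles to StringBuilder appends; no pattern parsing at runtime.
    static String viaConcat(String topic, int partition) {
        return "Partition [" + topic + "," + partition + "]";
    }
}
```

Both produce the same string; in a hot path such as per-message logging, the concatenation form avoids the per-call Formatter overhead.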


[jira] [Updated] (KAFKA-2308) New producer + Snappy face un-compression errors after broker restart

2015-07-08 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2308?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-2308:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

 New producer + Snappy face un-compression errors after broker restart
 -

 Key: KAFKA-2308
 URL: https://issues.apache.org/jira/browse/KAFKA-2308
 Project: Kafka
  Issue Type: Bug
Reporter: Gwen Shapira
Assignee: Gwen Shapira
 Fix For: 0.8.3

 Attachments: KAFKA-2308.patch



[jira] [Updated] (KAFKA-2308) New producer + Snappy face un-compression errors after broker restart

2015-07-08 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2308?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-2308:
-
Fix Version/s: 0.8.3

 New producer + Snappy face un-compression errors after broker restart
 -

 Key: KAFKA-2308
 URL: https://issues.apache.org/jira/browse/KAFKA-2308
 Project: Kafka
  Issue Type: Bug
Reporter: Gwen Shapira
Assignee: Gwen Shapira
 Fix For: 0.8.3

 Attachments: KAFKA-2308.patch



[jira] [Updated] (KAFKA-2316) Drop java 1.6 support

2015-07-08 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2316:

   Resolution: Fixed
Fix Version/s: 0.8.3
   Status: Resolved  (was: Patch Available)

 Drop java 1.6 support
 -

 Key: KAFKA-2316
 URL: https://issues.apache.org/jira/browse/KAFKA-2316
 Project: Kafka
  Issue Type: Bug
Reporter: Sriharsha Chintalapani
Assignee: Sriharsha Chintalapani
 Fix For: 0.8.3

 Attachments: KAFKA-2316.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2316) Drop java 1.6 support

2015-07-08 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618976#comment-14618976
 ] 

Gwen Shapira commented on KAFKA-2316:
-

+1. Committed and pushed to trunk.

 Drop java 1.6 support
 -

 Key: KAFKA-2316
 URL: https://issues.apache.org/jira/browse/KAFKA-2316
 Project: Kafka
  Issue Type: Bug
Reporter: Sriharsha Chintalapani
Assignee: Sriharsha Chintalapani
 Fix For: 0.8.3

 Attachments: KAFKA-2316.patch








[jira] [Commented] (KAFKA-2308) New producer + Snappy face un-compression errors after broker restart

2015-07-08 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618981#comment-14618981
 ] 

Gwen Shapira commented on KAFKA-2308:
-

Thanks :)

 New producer + Snappy face un-compression errors after broker restart
 -

 Key: KAFKA-2308
 URL: https://issues.apache.org/jira/browse/KAFKA-2308
 Project: Kafka
  Issue Type: Bug
Reporter: Gwen Shapira
Assignee: Gwen Shapira
 Fix For: 0.8.3

 Attachments: KAFKA-2308.patch



Build failed in Jenkins: KafkaPreCommit #142

2015-07-08 Thread Apache Jenkins Server
See https://builds.apache.org/job/KafkaPreCommit/142/changes

Changes:

[wangguoz] KAFKA-2308: make MemoryRecords idempotent; reviewed by Guozhang Wang

[cshapi] KAFKA-2316: Drop java 1.6 support; patched by Sriharsha Chintalapani 
reviewed by Ismael Juma and Gwen Shapira

--
Started by an SCM change
Building remotely on H11 (Ubuntu ubuntu) in workspace 
https://builds.apache.org/job/KafkaPreCommit/ws/
Cloning the remote Git repository
Cloning repository https://git-wip-us.apache.org/repos/asf/kafka.git
  git init https://builds.apache.org/job/KafkaPreCommit/ws/ # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
  git --version # timeout=10
  git fetch --tags --progress 
  https://git-wip-us.apache.org/repos/asf/kafka.git 
  +refs/heads/*:refs/remotes/origin/*
  git config remote.origin.url 
  https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
  git config remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # 
  timeout=10
  git config remote.origin.url 
  https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
  git fetch --tags --progress 
  https://git-wip-us.apache.org/repos/asf/kafka.git 
  +refs/heads/*:refs/remotes/origin/*
  git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
  git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision 7df39e0394a6fd2f26e8f12768ebf7fecd56e3da 
(refs/remotes/origin/trunk)
  git config core.sparsecheckout # timeout=10
  git checkout -f 7df39e0394a6fd2f26e8f12768ebf7fecd56e3da
  git rev-list 4204f4a06bf23160ceec4aa54331db62681bff82 # timeout=10
Setting 
GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
[KafkaPreCommit] $ /bin/bash -xe /tmp/hudson301654436593382.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.1/userguide/gradle_daemon.html.
Download 
https://repo1.maven.org/maven2/org/ajoberstar/grgit/0.2.3/grgit-0.2.3.pom
Download 
https://repo1.maven.org/maven2/org/eclipse/jgit/org.eclipse.jgit/3.3.0.201403021825-r/org.eclipse.jgit-3.3.0.201403021825-r.pom
Download 
https://repo1.maven.org/maven2/org/eclipse/jgit/org.eclipse.jgit-parent/3.3.0.201403021825-r/org.eclipse.jgit-parent-3.3.0.201403021825-r.pom
Download 
https://repo1.maven.org/maven2/org/eclipse/jgit/org.eclipse.jgit.ui/3.3.0.201403021825-r/org.eclipse.jgit.ui-3.3.0.201403021825-r.pom
Download 
https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.jsch/0.0.7/jsch.agentproxy.jsch-0.0.7.pom
Download 
https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy/0.0.7/jsch.agentproxy-0.0.7.pom
Download 
https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.pageant/0.0.7/jsch.agentproxy.pageant-0.0.7.pom
Download 
https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.sshagent/0.0.7/jsch.agentproxy.sshagent-0.0.7.pom
Download 
https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.usocket-jna/0.0.7/jsch.agentproxy.usocket-jna-0.0.7.pom
Download 
https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.usocket-nc/0.0.7/jsch.agentproxy.usocket-nc-0.0.7.pom
Download https://repo1.maven.org/maven2/com/jcraft/jsch/0.1.46/jsch-0.1.46.pom
Download 
https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.core/0.0.7/jsch.agentproxy.core-0.0.7.pom
Download 
https://repo1.maven.org/maven2/org/ajoberstar/grgit/0.2.3/grgit-0.2.3.jar
Download 
https://repo1.maven.org/maven2/org/eclipse/jgit/org.eclipse.jgit/3.3.0.201403021825-r/org.eclipse.jgit-3.3.0.201403021825-r.jar
Download 
https://repo1.maven.org/maven2/org/eclipse/jgit/org.eclipse.jgit.ui/3.3.0.201403021825-r/org.eclipse.jgit.ui-3.3.0.201403021825-r.jar
Download 
https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.jsch/0.0.7/jsch.agentproxy.jsch-0.0.7.jar
Download 
https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.pageant/0.0.7/jsch.agentproxy.pageant-0.0.7.jar
Download 
https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.sshagent/0.0.7/jsch.agentproxy.sshagent-0.0.7.jar
Download 
https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.usocket-jna/0.0.7/jsch.agentproxy.usocket-jna-0.0.7.jar
Download 
https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.usocket-nc/0.0.7/jsch.agentproxy.usocket-nc-0.0.7.jar
Download 
https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.core/0.0.7/jsch.agentproxy.core-0.0.7.jar
Download 
https://repo1.maven.org/maven2/org/apache/httpcomponents/httpcore/4.1.4/httpcore-4.1.4.jar
Building project 'core' with Scala version 2.10.5
:downloadWrapper

BUILD SUCCESSFUL

Total time: 24.655 secs
Setting 
GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
[KafkaPreCommit] $ /bin/bash -xe 

[jira] [Commented] (KAFKA-2298) Client Selector can drop connections on InvalidReceiveException without notifying NetworkClient

2015-07-08 Thread Dong Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14619059#comment-14619059
 ] 

Dong Lin commented on KAFKA-2298:
-

[~sriharsha] Sorry for the inconvenience, but the patch was already committed 
yesterday. What should we do to fold this into your patch and avoid the merge issue?

 Client Selector can drop connections on InvalidReceiveException without 
 notifying NetworkClient
 ---

 Key: KAFKA-2298
 URL: https://issues.apache.org/jira/browse/KAFKA-2298
 Project: Kafka
  Issue Type: Bug
Reporter: Dong Lin
Assignee: Dong Lin
  Labels: quotas
 Attachments: KAFKA-2298.patch, KAFKA-2298_2015-06-23_18:47:54.patch, 
 KAFKA-2298_2015-06-24_13:00:39.patch


 I run into the problem described in KAFKA-2266 when testing quota. I was told 
 the bug was fixed in KAFKA-2266 after I figured out the problem.
 But the patch provided in KAFKA-2266 probably doesn't solve all related 
 problems. From reading the code there is still one edge case where the client 
 selector can close connection in poll() without notifying NetworkClient.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
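The edge case described above — the selector closing a connection during poll() without surfacing the disconnect — can be illustrated with a small sketch. Class and method names here are hypothetical; this is not Kafka's actual Selector code:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: the fix is to record the closed node in a
// `disconnected` list that the client layer inspects after poll(),
// instead of silently closing the channel when a receive is invalid.
class SelectorSketch {
    final List<String> disconnected = new ArrayList<>();

    void poll(String nodeId, boolean receiveTooLarge) {
        try {
            if (receiveTooLarge)
                // stand-in for InvalidReceiveException thrown while reading
                throw new IllegalStateException("invalid receive (simulated)");
        } catch (IllegalStateException e) {
            close(nodeId);
            // without this line, NetworkClient never learns the connection died
            disconnected.add(nodeId);
        }
    }

    void close(String nodeId) { /* close the socket channel */ }

    public static void main(String[] args) {
        SelectorSketch s = new SelectorSketch();
        s.poll("node-1", true);
        System.out.println(s.disconnected); // the disconnect is now observable
    }
}
```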


[jira] [Commented] (KAFKA-2298) Client Selector can drop connections on InvalidReceiveException without notifying NetworkClient

2015-07-08 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14619060#comment-14619060
 ] 

Sriharsha Chintalapani commented on KAFKA-2298:
---

[~lindong] ahh, never mind then :). I'll do the merge. Thanks.

 Client Selector can drop connections on InvalidReceiveException without 
 notifying NetworkClient
 ---

 Key: KAFKA-2298
 URL: https://issues.apache.org/jira/browse/KAFKA-2298
 Project: Kafka
  Issue Type: Bug
Reporter: Dong Lin
Assignee: Dong Lin
  Labels: quotas
 Attachments: KAFKA-2298.patch, KAFKA-2298_2015-06-23_18:47:54.patch, 
 KAFKA-2298_2015-06-24_13:00:39.patch


 I run into the problem described in KAFKA-2266 when testing quota. I was told 
 the bug was fixed in KAFKA-2266 after I figured out the problem.
 But the patch provided in KAFKA-2266 probably doesn't solve all related 
 problems. From reading the code there is still one edge case where the client 
 selector can close connection in poll() without notifying NetworkClient.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Dropping support for Scala 2.9.x

2015-07-08 Thread Ashish Singh
+1

On Wed, Jul 8, 2015 at 9:52 AM, Guozhang Wang wangg...@gmail.com wrote:

 +1.

 Scala 2.9 has been 4 years old and I think it is time to drop it.

 On Wed, Jul 8, 2015 at 7:22 AM, Grant Henke ghe...@cloudera.com wrote:

  +1 for dropping 2.9
 
  On Wed, Jul 8, 2015 at 9:15 AM, Sriharsha Chintalapani ka...@harsha.io
  wrote:
 
   I am +1 on dropping 2.9.x support.
  
   Thanks,
   Harsha
  
  
   On July 8, 2015 at 7:08:12 AM, Ismael Juma (mli...@juma.me.uk) wrote:
  
   Hi,
  
   The responses in this thread were positive, but there weren't many. A
 few
   months passed and Sriharsha encouraged me to reopen the thread given
 that
   the 2.9 build has been broken for at least a week[1] and no-one seemed
 to
   notice.
  
   Do we want to invest more time so that the 2.9 build continues to work
 or
   do we want to focus our efforts on 2.10 and 2.11? Please share your
   opinion.
  
   Best,
   Ismael
  
   [1] https://issues.apache.org/jira/browse/KAFKA-2325
  
   On Fri, Mar 27, 2015 at 2:20 PM, Ismael Juma mli...@juma.me.uk
 wrote:
  
Hi all,
   
The Kafka build currently includes support for Scala 2.9, which means
   that
it cannot take advantage of features introduced in Scala 2.10 or
 depend
   on
libraries that require it.
   
This restricts the solutions available while trying to solve existing
issues. I was browsing JIRA looking for areas to contribute and I
  quickly
ran into two issues where this is the case:
   
* KAFKA-1351: String.format is very expensive in Scala could be
  solved
nicely by using the String interpolation feature introduced in Scala
   2.10.
   
* KAFKA-1595: Remove deprecated and slower scala JSON parser from
kafka.consumer.TopicCount could be solved by using an existing JSON
library, but both jackson-scala and play-json require 2.10 (argonaut
supports Scala 2.9, but it brings other dependencies like scalaz). We
  can
workaround this by writing our own code instead of using libraries,
 of
course, but it's not ideal.
   
Other features like Scala Futures and value classes would also be
  useful
in some situations, I would think (for a more extensive list of new
features, see
   
  
 
 http://scala-language.1934581.n4.nabble.com/Scala-2-10-0-now-available-td4634126.html
).
   
Another pain point of supporting 2.9.x is that it doubles the number
 of
build and test configurations required from 2 to 4 (because the 2.9.x
series was not necessarily binary compatible).
   
A strong argument for maintaining support for 2.9.x was the client
library, but that has been rewritten in Java.
   
It's also worth mentioning that Scala 2.9.1 was released in August
 2011
(more than 3.5 years ago) and the 2.9.x series hasn't received
 updates
  of
any sort since early 2013. Scala 2.10.0, in turn, was released in
  January
2013 (over 2 years ago) and 2.10.5, the last planned release in the
   2.10.x
series, has been recently released (so even 2.10.x won't be receiving
updates any longer).
   
All in all, I think it would not be unreasonable to drop support for
   Scala
2.9.x in a future release, but I may be missing something. What do
  others
think?
   
Ismael
   
  
 
 
 
  --
  Grant Henke
  Solutions Consultant | Cloudera
  ghe...@cloudera.com | twitter.com/gchenke | linkedin.com/in/granthenke
 



 --
 -- Guozhang




-- 

Regards,
Ashish


[jira] [Commented] (KAFKA-2316) Drop java 1.6 support

2015-07-08 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14619087#comment-14619087
 ] 

Ismael Juma commented on KAFKA-2316:


It looks like this broke Jenkins:

https://builds.apache.org/job/Kafka-trunk/534/console

It works locally for me, so it looks like the Jenkins job needs to be updated 
somehow.

 Drop java 1.6 support
 -

 Key: KAFKA-2316
 URL: https://issues.apache.org/jira/browse/KAFKA-2316
 Project: Kafka
  Issue Type: Bug
Reporter: Sriharsha Chintalapani
Assignee: Sriharsha Chintalapani
 Fix For: 0.8.3

 Attachments: KAFKA-2316.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: Kafka-trunk #536

2015-07-08 Thread Apache Jenkins Server
See https://builds.apache.org/job/Kafka-trunk/536/

--
[...truncated 4674 lines...]
kafka.coordinator.ConsumerGroupMetadataTest > testPreparingRebalanceToDeadTransition PASSED

kafka.coordinator.ConsumerGroupMetadataTest > testRebalancingToStableTransition PASSED

kafka.coordinator.ConsumerGroupMetadataTest > testStableToStableIllegalTransition PASSED

kafka.coordinator.ConsumerGroupMetadataTest > testCannotRebalanceWhenRebalancing PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[0] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[1] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[2] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[3] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[4] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[5] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[6] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[7] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[8] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[9] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[10] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[11] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[12] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[13] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[14] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[15] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[16] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[17] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[18] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[19] PASSED

kafka.log.OffsetMapTest > testClear PASSED

kafka.log.OffsetMapTest > testBasicValidation PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithRelativeDirectory PASSED

kafka.log.LogManagerTest > testCreateLog PASSED

kafka.log.LogManagerTest > testGetNonExistentLog PASSED

kafka.log.LogManagerTest > testCleanupExpiredSegments PASSED

kafka.log.LogManagerTest > testTimeBasedFlush PASSED

kafka.log.LogManagerTest > testCleanupSegmentsToMaintainSize PASSED

kafka.log.LogManagerTest > testLeastLoadedAssignment PASSED

kafka.log.LogManagerTest > testTwoLogManagersUsingSameDirFails PASSED

kafka.log.LogManagerTest > testCheckpointRecoveryPoints PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithTrailingSlash PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeAppendMessage PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeClearShutdown PASSED

kafka.log.LogSegmentTest > testReadOnEmptySegment PASSED

kafka.log.LogSegmentTest > testReadBeforeFirstOffset PASSED

kafka.log.LogSegmentTest > testMaxOffset PASSED

kafka.log.LogSegmentTest > testReadAfterLast PASSED

kafka.log.LogSegmentTest > testReadFromGap PASSED

kafka.log.LogSegmentTest > testTruncate PASSED

kafka.log.LogSegmentTest > testTruncateFull PASSED

kafka.log.LogSegmentTest > testNextOffsetCalculation PASSED

kafka.log.LogSegmentTest > testChangeFileSuffixes PASSED

kafka.log.LogSegmentTest > testRecoveryFixesCorruptIndex PASSED

kafka.log.LogSegmentTest > testRecoveryWithCorruptMessage PASSED

kafka.log.FileMessageSetTest > testRead PASSED

kafka.log.FileMessageSetTest > testSearch PASSED

kafka.log.FileMessageSetTest > testIteratorWithLimits PASSED

kafka.log.FileMessageSetTest > testPreallocateTrue PASSED

kafka.log.FileMessageSetTest > testPreallocateFalse PASSED

kafka.log.FileMessageSetTest > testPreallocateClearShutdown PASSED

kafka.log.FileMessageSetTest > testIteratorIsConsistent PASSED

kafka.log.FileMessageSetTest > testWriteTo PASSED

kafka.log.FileMessageSetTest > testFileSize PASSED

kafka.log.FileMessageSetTest > testWrittenEqualsRead PASSED

kafka.log.FileMessageSetTest > testIterationOverPartialAndTruncation PASSED

kafka.log.FileMessageSetTest > testIterationDoesntChangePosition PASSED

kafka.log.FileMessageSetTest > testSizeInBytes PASSED

kafka.log.FileMessageSetTest > testTruncate PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[0] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[1] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[2] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[3] PASSED

kafka.log.LogTest > testTimeBasedLogRoll PASSED

kafka.log.LogTest > testIndexResizingAtTruncation PASSED

kafka.log.LogTest > testIndexRebuild PASSED

kafka.log.LogTest > testCorruptIndexRebuild PASSED

kafka.log.LogTest > testTruncateTo PASSED

kafka.log.LogTest > testAsyncDelete PASSED

kafka.log.LogTest > testAppendAndReadWithSequentialOffsets PASSED

kafka.log.LogTest > testAppendAndReadWithNonSequentialOffsets PASSED

kafka.log.LogTest > testLogRolls PASSED

kafka.log.LogTest > testOpenDeletesObsoleteFiles PASSED

kafka.log.LogTest > 

[jira] [Commented] (KAFKA-2123) Make new consumer offset commit API use callback + future

2015-07-08 Thread Jason Gustafson (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14619432#comment-14619432
 ] 

Jason Gustafson commented on KAFKA-2123:


[~ewencp], [~guozhang], apologies for the big commit, but I saw an opportunity 
to consolidate some of the features from Ewen's previous patch with those 
introduced in KAFKA-2168 into a general ConsumerNetworkClient. This allowed me 
to push the retry logic that spilled into KafkaConsumer back into Coordinator 
and Fetcher while still ensuring that consumer.wakeup() would continue to work 
and scheduled tasks (such as auto-commits and heartbeats) would get executed at 
the right time regardless of where the thread of execution was. I have not, 
however, preserved the commit queue from Ewen's initial patches, but we can 
bring it back if we think it adds a lot of value.
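As a rough illustration of the scheduled-task mechanism mentioned above (heartbeats and auto-commits firing at the right time regardless of where the thread of execution is), here is a minimal delayed-task queue sketch. The names are hypothetical; this is not the code from the patch:

```java
import java.util.PriorityQueue;

// Sketch of a delayed-task queue: tasks ordered by deadline, and poll()
// runs everything that is due, so periodic work (heartbeats, auto-commits)
// executes on time whenever the client thread passes through.
class DelayedTaskQueueSketch {
    static final class Task implements Comparable<Task> {
        final long deadlineMs; final Runnable body;
        Task(long deadlineMs, Runnable body) { this.deadlineMs = deadlineMs; this.body = body; }
        public int compareTo(Task o) { return Long.compare(deadlineMs, o.deadlineMs); }
    }

    private final PriorityQueue<Task> queue = new PriorityQueue<>();

    void add(long deadlineMs, Runnable body) { queue.add(new Task(deadlineMs, body)); }

    // Run all tasks whose deadline has passed.
    void poll(long nowMs) {
        while (!queue.isEmpty() && queue.peek().deadlineMs <= nowMs)
            queue.poll().body.run();
    }

    public static void main(String[] args) {
        DelayedTaskQueueSketch q = new DelayedTaskQueueSketch();
        StringBuilder order = new StringBuilder();
        q.add(200, () -> order.append("commit "));
        q.add(100, () -> order.append("heartbeat "));
        q.poll(150);                         // only the heartbeat is due
        q.poll(250);                         // now the auto-commit runs
        System.out.println(order.toString().trim()); // prints "heartbeat commit"
    }
}
```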

 Make new consumer offset commit API use callback + future
 -

 Key: KAFKA-2123
 URL: https://issues.apache.org/jira/browse/KAFKA-2123
 Project: Kafka
  Issue Type: Sub-task
  Components: clients, consumer
Reporter: Ewen Cheslack-Postava
Assignee: Ewen Cheslack-Postava
Priority: Critical
 Fix For: 0.8.3

 Attachments: KAFKA-2123.patch, KAFKA-2123.patch, 
 KAFKA-2123_2015-04-30_11:23:05.patch, KAFKA-2123_2015-05-01_19:33:19.patch, 
 KAFKA-2123_2015-05-04_09:39:50.patch, KAFKA-2123_2015-05-04_22:51:48.patch, 
 KAFKA-2123_2015-05-29_11:11:05.patch


 The current version of the offset commit API in the new consumer is
 void commit(offsets, commit type)
 where the commit type is either sync or async. This means you need to use 
 sync if you ever want confirmation that the commit succeeded. Some 
 applications will want to use asynchronous offset commit, but be able to tell 
 when the commit completes.
 This is basically the same problem that had to be fixed going from old 
 consumer -> new consumer and I'd suggest the same fix using a callback + 
 future combination. The new API would be
 Future<Void> commit(Map<TopicPartition, Long> offsets, ConsumerCommitCallback 
 callback);
 where ConsumerCommitCallback contains a single method:
 public void onCompletion(Exception exception);
 We can provide shorthand variants of commit() for eliding the different 
 arguments.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
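A minimal sketch of the callback + future commit API proposed in the issue description. To keep the example self-contained it uses Map<String, Long> in place of Map<TopicPartition, Long>, and every name beyond those quoted above is hypothetical:

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;

// The single-method callback from the proposal.
interface ConsumerCommitCallback {
    void onCompletion(Exception exception);
}

class CommitSketch {
    // Returns a future that completes when the (simulated) commit finishes;
    // the callback receives null on success, or the failure cause otherwise.
    // A real client would send an OffsetCommitRequest asynchronously here.
    static CompletableFuture<Void> commit(Map<String, Long> offsets,
                                          ConsumerCommitCallback callback) {
        CompletableFuture<Void> future = new CompletableFuture<>();
        future.complete(null);                       // simulate instant success
        if (callback != null) callback.onCompletion(null);
        return future;
    }

    public static void main(String[] args) throws Exception {
        boolean[] called = new boolean[1];
        CompletableFuture<Void> f = commit(Map.of("topic-0", 42L),
                e -> called[0] = true);
        f.get();                                     // sync usage: block on the future
        System.out.println(called[0]);               // async usage: callback fired
    }
}
```

The point of the combination is that callers who want synchronous semantics just block on the returned future, while purely asynchronous callers rely on the callback.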


[jira] [Commented] (KAFKA-2316) Drop java 1.6 support

2015-07-08 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14619199#comment-14619199
 ] 

Ismael Juma commented on KAFKA-2316:


Thank you [~jarcec], any chance you could also update 
https://builds.apache.org/job/KafkaPreCommit (if you haven't already)?

 Drop java 1.6 support
 -

 Key: KAFKA-2316
 URL: https://issues.apache.org/jira/browse/KAFKA-2316
 Project: Kafka
  Issue Type: Bug
Reporter: Sriharsha Chintalapani
Assignee: Sriharsha Chintalapani
 Fix For: 0.8.3

 Attachments: KAFKA-2316.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2316) Drop java 1.6 support

2015-07-08 Thread Jarek Jarcec Cecho (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14619271#comment-14619271
 ] 

Jarek Jarcec Cecho commented on KAFKA-2316:
---

I've just updated the second job as well [~ijuma].

 Drop java 1.6 support
 -

 Key: KAFKA-2316
 URL: https://issues.apache.org/jira/browse/KAFKA-2316
 Project: Kafka
  Issue Type: Bug
Reporter: Sriharsha Chintalapani
Assignee: Sriharsha Chintalapani
 Fix For: 0.8.3

 Attachments: KAFKA-2316.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Review Request 36333: Patch for KAFKA-2123

2015-07-08 Thread Jason Gustafson

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/36333/
---

Review request for kafka.


Bugs: KAFKA-2123
https://issues.apache.org/jira/browse/KAFKA-2123


Repository: kafka


Description
---

KAFKA-2123; resolve problems from rebase


Diffs
-

  clients/src/main/java/org/apache/kafka/clients/consumer/Consumer.java 
fd98740bff175cc9d5bc02e365d88e011ef65d22 
  
clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerCommitCallback.java
 PRE-CREATION 
  clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerRecords.java 
eb75d2e797e3aa3992e4cf74b12f51c8f1545e02 
  clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java 
7aa076084c894bb8f47b9df2c086475b06f47060 
  clients/src/main/java/org/apache/kafka/clients/consumer/MockConsumer.java 
46e26a665a22625d50888efa7b53472279f36e79 
  
clients/src/main/java/org/apache/kafka/clients/consumer/internals/ConsumerNetworkClient.java
 PRE-CREATION 
  
clients/src/main/java/org/apache/kafka/clients/consumer/internals/Coordinator.java
 c1c8172cd45f6715262f9a6f497a7b1797a834a3 
  
clients/src/main/java/org/apache/kafka/clients/consumer/internals/DelayedTask.java
 PRE-CREATION 
  
clients/src/main/java/org/apache/kafka/clients/consumer/internals/DelayedTaskQueue.java
 PRE-CREATION 
  
clients/src/main/java/org/apache/kafka/clients/consumer/internals/Fetcher.java 
695eaf63db9a5fa20dc2ca68957901462a96cd96 
  
clients/src/main/java/org/apache/kafka/clients/consumer/internals/Heartbeat.java
 51eae1944d5c17cf838be57adf560bafe36fbfbd 
  
clients/src/main/java/org/apache/kafka/clients/consumer/internals/NoAvailableBrokersException.java
 PRE-CREATION 
  
clients/src/main/java/org/apache/kafka/clients/consumer/internals/RequestFuture.java
 13fc9af7392b4ade958daf3b0c9a165ddda351a6 
  
clients/src/main/java/org/apache/kafka/clients/consumer/internals/RequestFutureAdapter.java
 PRE-CREATION 
  
clients/src/main/java/org/apache/kafka/clients/consumer/internals/RequestFutureListener.java
 PRE-CREATION 
  
clients/src/main/java/org/apache/kafka/clients/consumer/internals/SendFailedException.java
 PRE-CREATION 
  
clients/src/main/java/org/apache/kafka/clients/consumer/internals/StaleMetadataException.java
 PRE-CREATION 
  
clients/src/main/java/org/apache/kafka/clients/consumer/internals/SubscriptionState.java
 683745304c671952ff566f23b5dd4cf3ab75377a 
  
clients/src/main/java/org/apache/kafka/common/errors/ConsumerCoordinatorNotAvailableException.java
 PRE-CREATION 
  clients/src/main/java/org/apache/kafka/common/errors/DisconnectException.java 
PRE-CREATION 
  
clients/src/main/java/org/apache/kafka/common/errors/IllegalGenerationException.java
 PRE-CREATION 
  
clients/src/main/java/org/apache/kafka/common/errors/NotCoordinatorForConsumerException.java
 PRE-CREATION 
  
clients/src/main/java/org/apache/kafka/common/errors/OffsetLoadInProgressException.java
 PRE-CREATION 
  
clients/src/main/java/org/apache/kafka/common/errors/UnknownConsumerIdException.java
 PRE-CREATION 
  clients/src/main/java/org/apache/kafka/common/protocol/Errors.java 
4c0ecc3badd99727b5bd9d430364e61c184e0923 
  
clients/src/test/java/org/apache/kafka/clients/consumer/internals/ConsumerNetworkClientTest.java
 PRE-CREATION 
  
clients/src/test/java/org/apache/kafka/clients/consumer/internals/CoordinatorTest.java
 d085fe5c9e2a0567893508a1c71f014fae6d7510 
  
clients/src/test/java/org/apache/kafka/clients/consumer/internals/DelayedTaskQueueTest.java
 PRE-CREATION 
  
clients/src/test/java/org/apache/kafka/clients/consumer/internals/FetcherTest.java
 405efdc7a59438731cbc3630876bda0042a3adb3 
  
clients/src/test/java/org/apache/kafka/clients/consumer/internals/HeartbeatTest.java
 ee1ede01efa070409b86f5d8874cd578e058ce51 
  
clients/src/test/java/org/apache/kafka/clients/consumer/internals/RequestFutureTest.java
 PRE-CREATION 
  core/src/main/scala/kafka/coordinator/ConsumerCoordinator.scala 
476973b2c551db5be3f1c54f94990f0dd15ff65e 
  core/src/test/scala/integration/kafka/api/ConsumerTest.scala 
92ffb91b5e039dc0d4cd0e072ca46db32f280cf9 

Diff: https://reviews.apache.org/r/36333/diff/


Testing
---


Thanks,

Jason Gustafson



[jira] [Commented] (KAFKA-2123) Make new consumer offset commit API use callback + future

2015-07-08 Thread Jason Gustafson (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14619386#comment-14619386
 ] 

Jason Gustafson commented on KAFKA-2123:


Created reviewboard https://reviews.apache.org/r/36333/diff/
 against branch upstream/trunk

 Make new consumer offset commit API use callback + future
 -

 Key: KAFKA-2123
 URL: https://issues.apache.org/jira/browse/KAFKA-2123
 Project: Kafka
  Issue Type: Sub-task
  Components: clients, consumer
Reporter: Ewen Cheslack-Postava
Assignee: Ewen Cheslack-Postava
Priority: Critical
 Fix For: 0.8.3

 Attachments: KAFKA-2123.patch, KAFKA-2123.patch, 
 KAFKA-2123_2015-04-30_11:23:05.patch, KAFKA-2123_2015-05-01_19:33:19.patch, 
 KAFKA-2123_2015-05-04_09:39:50.patch, KAFKA-2123_2015-05-04_22:51:48.patch, 
 KAFKA-2123_2015-05-29_11:11:05.patch


 The current version of the offset commit API in the new consumer is
 void commit(offsets, commit type)
 where the commit type is either sync or async. This means you need to use 
 sync if you ever want confirmation that the commit succeeded. Some 
 applications will want to use asynchronous offset commit, but be able to tell 
 when the commit completes.
 This is basically the same problem that had to be fixed going from old 
 consumer -> new consumer and I'd suggest the same fix using a callback + 
 future combination. The new API would be
 Future<Void> commit(Map<TopicPartition, Long> offsets, ConsumerCommitCallback 
 callback);
 where ConsumerCommitCallback contains a single method:
 public void onCompletion(Exception exception);
 We can provide shorthand variants of commit() for eliding the different 
 arguments.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2123) Make new consumer offset commit API use callback + future

2015-07-08 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson updated KAFKA-2123:
---
Attachment: KAFKA-2123.patch

 Make new consumer offset commit API use callback + future
 -

 Key: KAFKA-2123
 URL: https://issues.apache.org/jira/browse/KAFKA-2123
 Project: Kafka
  Issue Type: Sub-task
  Components: clients, consumer
Reporter: Ewen Cheslack-Postava
Assignee: Ewen Cheslack-Postava
Priority: Critical
 Fix For: 0.8.3

 Attachments: KAFKA-2123.patch, KAFKA-2123.patch, 
 KAFKA-2123_2015-04-30_11:23:05.patch, KAFKA-2123_2015-05-01_19:33:19.patch, 
 KAFKA-2123_2015-05-04_09:39:50.patch, KAFKA-2123_2015-05-04_22:51:48.patch, 
 KAFKA-2123_2015-05-29_11:11:05.patch


 The current version of the offset commit API in the new consumer is
 void commit(offsets, commit type)
 where the commit type is either sync or async. This means you need to use 
 sync if you ever want confirmation that the commit succeeded. Some 
 applications will want to use asynchronous offset commit, but be able to tell 
 when the commit completes.
 This is basically the same problem that had to be fixed going from old 
 consumer -> new consumer and I'd suggest the same fix using a callback + 
 future combination. The new API would be
 Future<Void> commit(Map<TopicPartition, Long> offsets, ConsumerCommitCallback 
 callback);
 where ConsumerCommitCallback contains a single method:
 public void onCompletion(Exception exception);
 We can provide shorthand variants of commit() for eliding the different 
 arguments.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2310) Add config to prevent broker becoming controller

2015-07-08 Thread Joe Stein (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14618034#comment-14618034
 ] 

Joe Stein commented on KAFKA-2310:
--

Hey [~jjkoshy] I had suggested to Andrii that it might make sense to make this 
a new ticket. The old ticket (which I think we should close) had the idea that 
we wanted to re-elect the controller. This would be problematic for what this is 
trying to solve based on what we have seen in the field. E.g. if you have 12 
brokers and they are under heavy load, then providing a way to bounce the 
controller around isn't going to help if, when it gets to a broker, that broker 
can't perform its responsibilities sufficiently. The consensus I have been able to 
get from ops folks is that separating/isolating the controller onto two brokers 
(two for redundancy) on lower-end equipment solves the problem fully. Since this 
is just another config, I didn't think it needed a KIP, but honestly I wasn't 
100% sure; otherwise I would have already committed this feature. The purpose of 
the patch for different versions is that I know a bunch of folks who are 
going to take it for the version of Kafka they are using and start using the 
feature.

 Add config to prevent broker becoming controller
 

 Key: KAFKA-2310
 URL: https://issues.apache.org/jira/browse/KAFKA-2310
 Project: Kafka
  Issue Type: Bug
Reporter: Andrii Biletskyi
Assignee: Andrii Biletskyi
 Attachments: KAFKA-2310.patch, KAFKA-2310_0.8.1.patch, 
 KAFKA-2310_0.8.2.patch


 The goal is to be able to specify which cluster brokers can serve as a 
 controller and which cannot. This way it will be possible to reserve 
 particular, not overloaded with partitions and other operations, broker as 
 controller.
 Proposed to add config _controller.eligibility_ defaulted to true (for 
 backward compatibility, since now any broker can become a controller)
 Patch will be available for trunk, 0.8.2 and 0.8.1



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
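The proposed controller.eligibility config could gate controller election roughly like this. This is an illustrative sketch, not the actual patch; defaulting to true preserves today's behavior where any broker may become controller:

```java
import java.util.Properties;

// Hypothetical illustration of the proposed `controller.eligibility` flag:
// a broker only attempts controller election when the flag is true.
class ControllerElectionSketch {
    static boolean mayElect(Properties brokerConfig) {
        // default "true" for backward compatibility
        return Boolean.parseBoolean(
            brokerConfig.getProperty("controller.eligibility", "true"));
    }

    public static void main(String[] args) {
        Properties dedicated = new Properties();   // no override: eligible
        Properties dataOnly = new Properties();    // heavy data broker: opted out
        dataOnly.setProperty("controller.eligibility", "false");
        System.out.println(mayElect(dedicated) + " " + mayElect(dataOnly)); // true false
    }
}
```

With such a flag, ops teams could set it to false on heavily loaded data brokers and leave it true only on the two lighter machines reserved for the controller role, as described in the comment above.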


Re: Build failed in Jenkins: KafkaPreCommit #138

2015-07-08 Thread Ewen Cheslack-Postava
Ah, that makes sense, thanks for finding that. We could alternatively use
build/** to avoid everything under build, or include a more complete list
via .gitignore instead (although that would have to be done carefully since
the pattern matching works differently). Unfortunately grgit is required to
filter out files that aren't covered by .gitignore and untracked by git.

-Ewen

On Tue, Jul 7, 2015 at 10:35 PM, Joel Koshy jjkosh...@gmail.com wrote:

  explicitly specify an exclude for build/rat/rat-report.xml

 (btw, I went ahead with a trivial commit for the above - let me know
 if there are any concerns)

 On Tue, Jul 07, 2015 at 09:44:27PM -0700, Joel Koshy wrote:
  You can reproduce the rat failure by running: ./gradlew clean && 
  ./gradlew test
 
  If you run gradlew test again it does not report any error.
 
  build.gradle uses Grgit to expand the gitignore files to exclude.  For
  a clean build, the build directory does not exist (yet). So it is not
  excluded by rat.  On subsequent builds it does exist, so it is
  excluded.
 
  The easiest fix is to probably explicitly specify an exclude for
  build/rat/rat-report.xml
 
  Thanks,
 
  Joel
 
  On Tue, Jul 07, 2015 at 05:12:12PM -0700, Jun Rao wrote:
   Ewen,
  
   We did have at least one successful run (
   https://builds.apache.org/job/Kafka-trunk/526/console) after the rat
 patch.
   Perhaps you can file an Apache infra ticket to figure out why other
 runs
   fail?
  
   Thanks,
  
   Jun
  
   On Tue, Jul 7, 2015 at 11:16 AM, Ewen Cheslack-Postava 
 e...@confluent.io
   wrote:
  
Is there a way to run the Jenkins build with more detailed output,
 e.g.
with one of gradle's debug flags? It looks like in the Jenkins
 environment
the exclude rules (which should be ignoring the build/ directory)
 aren't
working like they did for me and the reviewers, but there's not
 enough
useful info here to figure out why. Although even with those turned
 on, I'm
not sure the right info is logged...
   
-Ewen
   
On Tue, Jul 7, 2015 at 10:39 AM, Apache Jenkins Server 
jenk...@builds.apache.org wrote:
   
 See https://builds.apache.org/job/KafkaPreCommit/138/changes

 Changes:

 [junrao] kafka-1367; Broker topic metadata not kept in sync with
 ZooKeeper; patched by Ashish Singh; reviewed by Jun Rao

 [joe.stein] KAFKA-2304 Supported enabling JMX in Kafka Vagrantfile
 patch
 by Stevo Slavic reviewed by Ewen Cheslack-Postava

 --
 Started by an SCM change
 Building remotely on ubuntu-4 (docker Ubuntu ubuntu4 ubuntu) in
workspace 
 https://builds.apache.org/job/KafkaPreCommit/ws/
   git rev-parse --is-inside-work-tree # timeout=10
 Fetching changes from the remote Git repository
   git config remote.origin.url
 https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
 Fetching upstream changes from
 https://git-wip-us.apache.org/repos/asf/kafka.git
   git --version # timeout=10
   git fetch --tags --progress
 https://git-wip-us.apache.org/repos/asf/kafka.git
 +refs/heads/*:refs/remotes/origin/*
   git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
   git rev-parse refs/remotes/origin/origin/trunk^{commit} #
 timeout=10
 Checking out Revision ad485e148d7ac104abe173687ba27dccff8e4d39
 (refs/remotes/origin/trunk)
   git config core.sparsecheckout # timeout=10
   git checkout -f ad485e148d7ac104abe173687ba27dccff8e4d39
   git rev-list 271b18d119fdc37952c36c573ba185aa672e3f96 #
 timeout=10
 Setting

   
 GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
 [KafkaPreCommit] $ /bin/bash -xe /tmp/hudson8684664270792626732.sh
 +

   
 /home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1/bin/gradle
 To honour the JVM settings for this build a new JVM will be forked.
Please
 consider using the daemon:
 http://gradle.org/docs/2.1/userguide/gradle_daemon.html.
 Download

   
 https://repo1.maven.org/maven2/org/ajoberstar/grgit/0.2.3/grgit-0.2.3.pom
 Download

   
 https://repo1.maven.org/maven2/org/eclipse/jgit/org.eclipse.jgit/3.3.0.201403021825-r/org.eclipse.jgit-3.3.0.201403021825-r.pom
 Download

   
 https://repo1.maven.org/maven2/org/eclipse/jgit/org.eclipse.jgit-parent/3.3.0.201403021825-r/org.eclipse.jgit-parent-3.3.0.201403021825-r.pom
 Download

   
 https://repo1.maven.org/maven2/org/eclipse/jgit/org.eclipse.jgit.ui/3.3.0.201403021825-r/org.eclipse.jgit.ui-3.3.0.201403021825-r.pom
 Download

   
 https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.jsch/0.0.7/jsch.agentproxy.jsch-0.0.7.pom
 Download

   
 https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy/0.0.7/jsch.agentproxy-0.0.7.pom
 Download

   
 

Re: Review Request 36030: Patch for KAFKA-972

2015-07-08 Thread Ashish Singh


 On July 8, 2015, 12:52 a.m., Jun Rao wrote:
  Thanks for the patch. Saw the following transient unit test failure.
  
  kafka.integration.TopicMetadataTest > 
  testIsrAfterBrokerShutDownAndJoinsBack FAILED
  junit.framework.AssertionFailedError: Topic metadata is not correctly 
  updated for broker kafka.server.KafkaServer@5df78dd0.
  Expected ISR: List(BrokerEndPoint(0,localhost,59755), 
  BrokerEndPoint(1,localhost,59758), BrokerEndPoint(2,localhost,59762), 
  BrokerEndPoint(3,localhost,59789))
  Actual ISR  : 
  at junit.framework.Assert.fail(Assert.java:47)
  at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:619)
  at 
  kafka.integration.TopicMetadataTest$$anonfun$checkIsr$1.apply(TopicMetadataTest.scala:149)
  at 
  kafka.integration.TopicMetadataTest$$anonfun$checkIsr$1.apply(TopicMetadataTest.scala:147)
  at scala.collection.immutable.List.foreach(List.scala:318)
  at 
  kafka.integration.TopicMetadataTest.checkIsr(TopicMetadataTest.scala:147)
  at 
  kafka.integration.TopicMetadataTest.testIsrAfterBrokerShutDownAndJoinsBack(TopicMetadataTest.scala:190)

I tried it a few times before submitting the patch and it passed. The time for 
the ISR to get updated on all brokers running on the same machine will probably 
be proportional to the number of brokers, so I made the wait time 5000L * 
numConfigs. However, there is no guarantee that the ISRs will be updated within 
that time. I think we have two options: the first is to increase the wait time; 
the second is to limit the test case to 2 brokers. I prefer the second option, 
as the test case only requires 2 brokers for the scenario it is testing, and 
this avoids making the test take longer than a unit test should. I ran the test 
20 times back to back with the second approach and it works fine. Let me know 
if it still does not work for you.
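
The fixed `5000L * numConfigs` sleep discussed above can be replaced by condition polling with a deadline, which is the pattern `TestUtils.waitUntilTrue` in the stack trace implements. A minimal self-contained sketch of that pattern (names hypothetical, not Kafka's actual `TestUtils`):

```java
import java.util.function.BooleanSupplier;

// Poll a condition until it holds or a deadline passes, instead of relying
// on a fixed sleep whose length must scale with the number of brokers.
public class WaitUntil {
    public static boolean waitUntilTrue(BooleanSupplier condition, long timeoutMs, long pollMs) {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (condition.getAsBoolean())
                return true;              // condition met before the deadline
            try {
                Thread.sleep(pollMs);     // back off briefly before re-checking
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return condition.getAsBoolean();  // one final check at the deadline
    }
}
```

With this shape, a test that finishes early returns immediately, so a generous timeout costs nothing in the common case.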


- Ashish


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/36030/#review90843
---


On July 7, 2015, 5:42 p.m., Ashish Singh wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/36030/
 ---
 
 (Updated July 7, 2015, 5:42 p.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-972
 https://issues.apache.org/jira/browse/KAFKA-972
 
 
 Repository: kafka
 
 
 Description
 ---
 
 KAFKA-972: MetadataRequest returns stale list of brokers
 
 
 Diffs
 -
 
   core/src/main/scala/kafka/controller/KafkaController.scala 
 09630d07afc75d4b92c847e31032f8a1dfa0dabe 
   core/src/test/scala/unit/kafka/integration/TopicMetadataTest.scala 
 a95ee5e0849d29a5f95fdabed4f1988a308e9872 
 
 Diff: https://reviews.apache.org/r/36030/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Ashish Singh
 




Re: Review Request 36030: Patch for KAFKA-972

2015-07-08 Thread Ashish Singh

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/36030/
---

(Updated July 8, 2015, 6:24 a.m.)


Review request for kafka.


Bugs: KAFKA-972
https://issues.apache.org/jira/browse/KAFKA-972


Repository: kafka


Description
---

KAFKA-972: MetadataRequest returns stale list of brokers


Diffs (updated)
-

  core/src/main/scala/kafka/controller/KafkaController.scala 
20f1499046c768adbcd2bf8ad5969589c8641f34 
  core/src/test/scala/unit/kafka/integration/TopicMetadataTest.scala 
a95ee5e0849d29a5f95fdabed4f1988a308e9872 

Diff: https://reviews.apache.org/r/36030/diff/


Testing
---


Thanks,

Ashish Singh



[jira] [Commented] (KAFKA-972) MetadataRequest returns stale list of brokers

2015-07-08 Thread Ashish K Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618050#comment-14618050
 ] 

Ashish K Singh commented on KAFKA-972:
--

Updated reviewboard https://reviews.apache.org/r/36030/
 against branch trunk

 MetadataRequest returns stale list of brokers
 -

 Key: KAFKA-972
 URL: https://issues.apache.org/jira/browse/KAFKA-972
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 0.8.0
Reporter: Vinicius Carvalho
Assignee: Ashish K Singh
 Attachments: BrokerMetadataTest.scala, KAFKA-972.patch, 
 KAFKA-972_2015-06-30_18:42:13.patch, KAFKA-972_2015-07-01_01:36:56.patch, 
 KAFKA-972_2015-07-01_01:42:42.patch, KAFKA-972_2015-07-01_08:06:03.patch, 
 KAFKA-972_2015-07-06_23:07:34.patch, KAFKA-972_2015-07-07_10:42:41.patch, 
 KAFKA-972_2015-07-07_23:24:13.patch


 When we issue a MetadataRequest to the cluster, the list of brokers is 
 stale: even when a broker is down, it is still returned to the client. 
 The following are examples of two invocations, one with both brokers online 
 and the second with a broker down:
 {
   "brokers": [
     { "nodeId": 0, "host": "10.139.245.106", "port": 9092, "byteLength": 24 },
     { "nodeId": 1, "host": "localhost", "port": 9093, "byteLength": 19 }
   ],
   "topicMetadata": [
     {
       "topicErrorCode": 0,
       "topicName": "foozbar",
       "partitions": [
         { "replicas": [0], "isr": [0], "partitionErrorCode": 0, "partitionId": 0, "leader": 0, "byteLength": 26 },
         { "replicas": [1], "isr": [1], "partitionErrorCode": 0, "partitionId": 1, "leader": 1, "byteLength": 26 },
         { "replicas": [0], "isr": [0], "partitionErrorCode": 0, "partitionId": 2, "leader": 0, "byteLength": 26 },
         { "replicas": [1], "isr": [1], "partitionErrorCode": 0, "partitionId": 3, "leader": 1, "byteLength": 26 },
         { "replicas": [0], "isr": [0], "partitionErrorCode": 0, "partitionId": 4, "leader": 0, "byteLength": 26 }
       ],
       "byteLength": 145
     }
   ],
   "responseSize": 200,
   "correlationId": -1000
 }
 {
   "brokers": [
     { "nodeId": 0, "host": "10.139.245.106", "port": 9092, "byteLength": 24 },
     { "nodeId": 1, "host": "localhost", "port": 9093, "byteLength": 19 }
   ],
   "topicMetadata": [
     {
       "topicErrorCode": 0,
       "topicName": "foozbar",
       "partitions": [
         { "replicas": [0], "isr": [], "partitionErrorCode": 5, "partitionId": 0, "leader": -1, "byteLength": 22 },
         { "replicas": [1], "isr": [1], "partitionErrorCode": 0, "partitionId": 1, "leader": 1, "byteLength": 26 },
         { "replicas": [0], "isr": [], "partitionErrorCode": 5, "partitionId": 2, "leader": -1, "byteLength": 22 },
         {

[jira] [Updated] (KAFKA-972) MetadataRequest returns stale list of brokers

2015-07-08 Thread Ashish K Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish K Singh updated KAFKA-972:
-
Attachment: KAFKA-972_2015-07-07_23:24:13.patch

 MetadataRequest returns stale list of brokers
 -

 Key: KAFKA-972
 URL: https://issues.apache.org/jira/browse/KAFKA-972
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 0.8.0
Reporter: Vinicius Carvalho
Assignee: Ashish K Singh
 Attachments: BrokerMetadataTest.scala, KAFKA-972.patch, 
 KAFKA-972_2015-06-30_18:42:13.patch, KAFKA-972_2015-07-01_01:36:56.patch, 
 KAFKA-972_2015-07-01_01:42:42.patch, KAFKA-972_2015-07-01_08:06:03.patch, 
 KAFKA-972_2015-07-06_23:07:34.patch, KAFKA-972_2015-07-07_10:42:41.patch, 
 KAFKA-972_2015-07-07_23:24:13.patch


 When we issue a MetadataRequest to the cluster, the list of brokers is 
 stale: even when a broker is down, it is still returned to the client. 
 The following are examples of two invocations, one with both brokers online 
 and the second with a broker down:
 {
   "brokers": [
     { "nodeId": 0, "host": "10.139.245.106", "port": 9092, "byteLength": 24 },
     { "nodeId": 1, "host": "localhost", "port": 9093, "byteLength": 19 }
   ],
   "topicMetadata": [
     {
       "topicErrorCode": 0,
       "topicName": "foozbar",
       "partitions": [
         { "replicas": [0], "isr": [0], "partitionErrorCode": 0, "partitionId": 0, "leader": 0, "byteLength": 26 },
         { "replicas": [1], "isr": [1], "partitionErrorCode": 0, "partitionId": 1, "leader": 1, "byteLength": 26 },
         { "replicas": [0], "isr": [0], "partitionErrorCode": 0, "partitionId": 2, "leader": 0, "byteLength": 26 },
         { "replicas": [1], "isr": [1], "partitionErrorCode": 0, "partitionId": 3, "leader": 1, "byteLength": 26 },
         { "replicas": [0], "isr": [0], "partitionErrorCode": 0, "partitionId": 4, "leader": 0, "byteLength": 26 }
       ],
       "byteLength": 145
     }
   ],
   "responseSize": 200,
   "correlationId": -1000
 }
 {
   "brokers": [
     { "nodeId": 0, "host": "10.139.245.106", "port": 9092, "byteLength": 24 },
     { "nodeId": 1, "host": "localhost", "port": 9093, "byteLength": 19 }
   ],
   "topicMetadata": [
     {
       "topicErrorCode": 0,
       "topicName": "foozbar",
       "partitions": [
         { "replicas": [0], "isr": [], "partitionErrorCode": 5, "partitionId": 0, "leader": -1, "byteLength": 22 },
         { "replicas": [1], "isr": [1], "partitionErrorCode": 0, "partitionId": 1, "leader": 1, "byteLength": 26 },
         { "replicas": [0], "isr": [], "partitionErrorCode": 5, "partitionId": 2, "leader": -1, "byteLength": 22 },
         { "replicas": [1],

[jira] [Commented] (KAFKA-1479) Logs filling up while Kafka ReplicaFetcherThread tries to retrieve partition info for deleted topics

2015-07-08 Thread Sandeep Bishnoi
I am observing the log filling up on kafka_2.10-0.8.2.0.

Following are the logs on one of the Kafka nodes:
On Topic Creation:
[2015-07-07 20:28:17,579] INFO [ReplicaFetcherManager on broker 0] Removed
fetcher for partitions [tempv1,0] (kafka.server.ReplicaFetcherManager)
[2015-07-07 20:28:17,582] INFO Completed load of log tempv1-0 with log end
offset 0 (kafka.log.Log)
[2015-07-07 20:28:17,583] INFO Created log for partition [tempv1,0] in
/tmp/kafka-logs-server1 with properties {segment.index.bytes -> 10485760,
file.delete.delay.ms -> 60000, segment.bytes -> 536870912, flush.ms ->
9223372036854775807, delete.retention.ms -> 86400000, index.interval.bytes
-> 4096, retention.bytes -> -1, min.insync.replicas -> 1, cleanup.policy ->
delete, unclean.leader.election.enable -> true, segment.ms -> 604800000,
max.message.bytes -> 1000012, flush.messages -> 9223372036854775807,
min.cleanable.dirty.ratio -> 0.5, retention.ms -> 604800000,
segment.jitter.ms -> 0}. (kafka.log.LogManager)
[2015-07-07 20:28:17,583] WARN Partition [tempv1,0] on broker 0: No
checkpointed highwatermark is found for partition [tempv1,0]
(kafka.cluster.Partition)
[2015-07-07 20:28:18,388] ERROR [Replica Manager on Broker 0]: Error when
processing fetch request for partition [tempv1,0] offset -1 from consumer
with correlation id 0. Possible cause: Request for offset -1 but we only
have log segments in the range 0 to 0. (kafka.server.ReplicaManager)

On Topic Deletion:
[2015-07-07 20:29:05,075] INFO [ReplicaFetcherManager on broker 0] Removed
fetcher for partitions [tempv1,0] (kafka.server.ReplicaFetcherManager)
[2015-07-07 20:29:05,076] INFO [ReplicaFetcherManager on broker 0] Removed
fetcher for partitions [tempv1,0] (kafka.server.ReplicaFetcherManager)
[2015-07-07 20:29:05,076] WARN [Replica Manager on Broker 0]: Fetch request
with correlation id 0 from client Channel_tempv1_0 on partition [tempv1,0]
failed due to Partition [tempv1,0] doesn't exist on 0
(kafka.server.ReplicaManager)
[2015-07-07 20:29:05,077] INFO Deleting index
/tmp/kafka-logs-server1/tempv1-0/00000000000000000000.index
(kafka.log.OffsetIndex)
[2015-07-07 20:29:05,077] INFO Deleted log for partition [tempv1,0] in
/tmp/kafka-logs-server1/tempv1-0. (kafka.log.LogManager)
[2015-07-07 20:29:05,095] INFO Closing socket connection to /10.240.213.24.
(kafka.network.Processor)
[2015-07-07 20:29:05,095] WARN [Replica Manager on Broker 0]: Fetch request
with correlation id 0 from client Channel_tempv1_0 on partition [tempv1,0]
failed due to Partition [tempv1,0] doesn't exist on 0
(kafka.server.ReplicaManager)
[2015-07-07 20:29:05,096] INFO Closing socket connection to /10.240.213.24.
(kafka.network.Processor)
[2015-07-07 20:29:05,097] WARN [Replica Manager on Broker 0]: Fetch request
with correlation id 0 from client Channel_tempv1_0 on partition [tempv1,0]
failed due to Partition [tempv1,0] doesn't exist on 0
(kafka.server.ReplicaManager)
[2015-07-07 20:29:05,098] INFO Closing socket connection to /10.240.213.24.
(kafka.network.Processor)
..
..
At the time of deletion, there was no Java client running and accessing the
Kafka topic.


[jira] [Updated] (KAFKA-2311) Consumer's ensureNotClosed method not thread safe

2015-07-08 Thread Tim Brooks (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tim Brooks updated KAFKA-2311:
--
Attachment: KAFKA-2311.patch

 Consumer's ensureNotClosed method not thread safe
 -

 Key: KAFKA-2311
 URL: https://issues.apache.org/jira/browse/KAFKA-2311
 Project: Kafka
  Issue Type: Bug
  Components: clients
Reporter: Tim Brooks
Assignee: Tim Brooks
 Attachments: KAFKA-2311.patch, KAFKA-2311.patch


 When a call to the consumer is made, the first check is that the consumer 
 is not closed. The variable backing this check is not volatile, so there is 
 no guarantee that previous stores will be visible to a subsequent read.
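
A minimal sketch of the fix the issue describes (class and method names hypothetical, not Kafka's actual consumer): marking the flag `volatile` establishes a happens-before edge, so a `close()` performed by one thread is visible to any other thread that later checks the flag.

```java
// Without `volatile`, a thread calling ensureNotClosed() may never observe a
// close() performed by another thread, because plain field reads give no
// visibility guarantee across threads.
public class ClosableConsumer {
    private volatile boolean closed = false; // volatile publishes the write to all threads

    public void close() {
        closed = true;
    }

    public boolean isClosed() {
        return closed;
    }

    private void ensureNotClosed() {
        if (closed)
            throw new IllegalStateException("This consumer has already been closed.");
    }

    public void poll() {
        ensureNotClosed();
        // ... fetch records ...
    }
}
```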



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2311) Consumer's ensureNotClosed method not thread safe

2015-07-08 Thread Tim Brooks (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14619682#comment-14619682
 ] 

Tim Brooks commented on KAFKA-2311:
---

Created reviewboard https://reviews.apache.org/r/36341/diff/
 against branch origin/trunk

 Consumer's ensureNotClosed method not thread safe
 -

 Key: KAFKA-2311
 URL: https://issues.apache.org/jira/browse/KAFKA-2311
 Project: Kafka
  Issue Type: Bug
  Components: clients
Reporter: Tim Brooks
Assignee: Tim Brooks
 Attachments: KAFKA-2311.patch, KAFKA-2311.patch


 When a call to the consumer is made, the first check is that the consumer 
 is not closed. The variable backing this check is not volatile, so there is 
 no guarantee that previous stores will be visible to a subsequent read.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 34554: Patch for KAFKA-2205

2015-07-08 Thread Jun Rao

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/34554/#review91038
---


Thanks for the new patch. Just a couple of minor comments below.


core/src/main/scala/kafka/admin/ConfigCommand.scala (line 123)
https://reviews.apache.org/r/34554/#comment144291

Could we list the valid config names for each entity-type as we did in 
TopicCommand?



core/src/main/scala/kafka/server/ConfigHandler.scala (line 27)
https://reviews.apache.org/r/34554/#comment144282

Do we need JavaConversions? If this is needed, it would be better to import 
it in the context where the conversion is actually needed.


- Jun Rao


On July 8, 2015, 2:14 a.m., Aditya Auradkar wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/34554/
 ---
 
 (Updated July 8, 2015, 2:14 a.m.)
 
 
 Review request for kafka, Joel Koshy and Jun Rao.
 
 
 Bugs: KAFKA-2205
 https://issues.apache.org/jira/browse/KAFKA-2205
 
 
 Repository: kafka
 
 
 Description
 ---
 
 KAFKA-2205: Summary of changes
 1. Generalized TopicConfigManager to DynamicConfigManager. It is now able to 
 handle multiple types of entities.
 2. Changed format of the notification znode as described in KIP-21
 3. Replaced TopicConfigManager with DynamicConfigManager.
 4. Added new testcases. Existing testcases all pass
 5. Added ConfigCommand to handle all config changes. Eventually this will 
 make calls to the broker once the new APIs are built; for now it talks to 
 ZooKeeper directly
 6. Addressed all of Jun's comments.
 
 
 Diffs
 -
 
   core/src/main/scala/kafka/admin/AdminUtils.scala 
 f06edf41c732a7b794e496d0048b0ce6f897e72b 
   core/src/main/scala/kafka/admin/ConfigCommand.scala PRE-CREATION 
   core/src/main/scala/kafka/admin/TopicCommand.scala 
 a2ecb9620d647bf8f957a1f00f52896438e804a7 
   core/src/main/scala/kafka/cluster/Partition.scala 
 2649090b6cbf8d442649f19fd7113a30d62bca91 
   core/src/main/scala/kafka/controller/KafkaController.scala 
 20f1499046c768adbcd2bf8ad5969589c8641f34 
   core/src/main/scala/kafka/controller/PartitionLeaderSelector.scala 
 bb6b5c8764522e7947bb08998256ce1deb717c84 
   core/src/main/scala/kafka/controller/TopicDeletionManager.scala 
 64ecb499f24bc801d48f86e1612d927cc08e006d 
   core/src/main/scala/kafka/server/ConfigHandler.scala PRE-CREATION 
   core/src/main/scala/kafka/server/KafkaServer.scala 
 18917bc4464b9403b16d85d20c3fd4c24893d1d3 
   core/src/main/scala/kafka/server/ReplicaFetcherThread.scala 
 c89d00b5976ffa34cafdae261239934b1b917bfe 
   core/src/main/scala/kafka/server/TopicConfigManager.scala 
 01b1b0a8efe6ab3ddc7bf9f1f535b01be4e2e6be 
   core/src/main/scala/kafka/utils/ZkUtils.scala 
 166814c2959a429e20f400d1c0e523090ce37d91 
   core/src/test/scala/unit/kafka/admin/AdminTest.scala 
 252ac813c8df1780c2dc5fa9e698fb43bb6d5cf8 
   core/src/test/scala/unit/kafka/admin/ConfigCommandTest.scala PRE-CREATION 
   core/src/test/scala/unit/kafka/admin/TopicCommandTest.scala 
 dcd69881445c29765f66a7d21d2d18437f4df428 
   core/src/test/scala/unit/kafka/server/DynamicConfigChangeTest.scala 
 8a871cfaf6a534acd1def06a5ac95b5c985b024c 
 
 Diff: https://reviews.apache.org/r/34554/diff/
 
 
 Testing
 ---
 
 1. Added new testcases for new code.
 2. Verified that both topic and client configs can be changed dynamically by 
 starting a local cluster
 
 
 Thanks,
 
 Aditya Auradkar
 




Re: Review Request 36333: Patch for KAFKA-2123

2015-07-08 Thread Ewen Cheslack-Postava

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/36333/#review91016
---



clients/src/main/java/org/apache/kafka/clients/consumer/internals/ConsumerNetworkClient.java
 (line 91)
https://reviews.apache.org/r/36333/#comment144254

Is the retryBackoffMs here because that's how long you want to wait for a 
response before we jump back out of the poll, which fails the request and 
triggers another NetworkClient.poll() call, which in turn sends the 
MetadataRequest?

Just trying to figure out the flow because a) the flow is unclear and b) 
this is the only use of metadata in this class. Would this make sense to push 
into NetworkClient, which is the one responsible for making the request anyway? 
I'm not sure it's a good idea since nothing besides this class uses 
NetworkClient anymore, but worth thinking about.



clients/src/main/java/org/apache/kafka/clients/consumer/internals/ConsumerNetworkClient.java
 (line 121)
https://reviews.apache.org/r/36333/#comment144258

Is this the behavior we want? Both timeout and delayedTasks.nextTimeout() 
can be arbitrarily small. For delayedTasks especially, it seems like we're 
tying the failure of requests to unrelated events?

Previously, I thought request failures may happen quickly, but then there 
was a Utils.sleep backoff. I am not seeing how that is handled now? If the 
connection isn't established, won't poll() not send out requests, run 
client.poll(), possibly return very quickly before the connection is 
established/fails, then fail the unsent requests, then just return to the 
caller? And that caller might be one of those while() loops, so it may just 
continue to retry, busy looping while not actually accomplishing anything.

If we're going to introduce queuing of send requests here, it seems like a 
timeout (rather than fast fail + backoff) might be a more natural solution. So 
rather than clearUnsentRequests, it might be clearExpiredRequests.
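
The deadline-based expiry suggested above can be sketched as follows (a hypothetical standalone illustration, not the actual ConsumerNetworkClient API): each queued send carries a deadline, and only requests past their deadline are failed, rather than failing every unsent request whenever poll() returns.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Queue of unsent requests with per-request deadlines. clearExpiredRequests
// fails only the requests whose timeout has elapsed and keeps the rest
// queued, avoiding the busy-loop of fast-failing everything on each poll().
public class UnsentRequests {
    static final class Pending {
        final String request;    // stand-in for a real request object
        final long deadlineMs;
        Pending(String request, long deadlineMs) {
            this.request = request;
            this.deadlineMs = deadlineMs;
        }
    }

    private final Deque<Pending> queue = new ArrayDeque<>();

    public void enqueue(String request, long nowMs, long timeoutMs) {
        queue.add(new Pending(request, nowMs + timeoutMs));
    }

    // Remove and return only the requests whose deadline has passed.
    public List<String> clearExpiredRequests(long nowMs) {
        List<String> expired = new ArrayList<>();
        queue.removeIf(p -> {
            if (p.deadlineMs <= nowMs) {
                expired.add(p.request);
                return true;
            }
            return false;
        });
        return expired;
    }

    public int size() {
        return queue.size();
    }
}
```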



clients/src/main/java/org/apache/kafka/clients/consumer/internals/Coordinator.java
 (line 113)
https://reviews.apache.org/r/36333/#comment144273

I think this is a holdover from previous code, so feel free to defer to 
another JIRA. The code in KafkaConsumer.committed() implies it should work for 
partitions not assigned to this consumer (and that is useful functionality we 
should probably support). But here, the argument is ignored in the condition 
for the while() loop, which seems error prone. I don't see a way that 
committed() would cause subscriptions.refreshCommitsNeeded to return true, so I 
think it'll currently fail for unassigned partitions.

For this code, I think it's risky not to specifically be checking the 
requested partitions against the available set in subscriptions unless the 
caller is already guaranteed to do that.



clients/src/main/java/org/apache/kafka/clients/consumer/internals/Coordinator.java
 (line 137)
https://reviews.apache.org/r/36333/#comment144272

This being separated from the rebalance callbacks concerns me a little, 
specifically because we're adding user callbacks and those can potentially make 
additional calls on the consumer. I scanned through all the usages and can 
currently only find one weird edge case that could create incorrect behavior:

1. Do async offset commit
2. During normal poll(), offset commit finishes and callback is invoked.
3. The callback calls committed(TopicPartition), which invokes 
refreshCommittedOffsets, which invokes ensureAssignment and handles the 
rebalance without invoking callbacks.

Actually, I think based on my earlier comment this won't currently happen, 
but only because I think committed() won't work properly with non-assigned 
partitions.



clients/src/main/java/org/apache/kafka/clients/consumer/internals/Coordinator.java
 (line 298)
https://reviews.apache.org/r/36333/#comment144274

Why trigger commit refresh here?



clients/src/main/java/org/apache/kafka/clients/consumer/internals/Coordinator.java
 (line 526)
https://reviews.apache.org/r/36333/#comment144281

Stray unnecessary Coordinator. prefix.



clients/src/main/java/org/apache/kafka/clients/consumer/internals/Coordinator.java
 (line 640)
https://reviews.apache.org/r/36333/#comment144280

Is this really what should be happening? I'm trying to figure out how 
we can even reach this code, since RequestFutureCompletionHandler only has 
onComplete -> RequestFuture.complete(), which indicates success.

This does need to handle both cases, but can we validly get to onFailure 
the way this class is used, or should this be some stronger assertion?



clients/src/main/java/org/apache/kafka/clients/consumer/internals/DelayedTaskQueue.java
 (line 74)
https://reviews.apache.org/r/36333/#comment144226

Since we bumped to Java 1.7 you 

Re: Build failed in Jenkins: KafkaPreCommit #142

2015-07-08 Thread Ewen Cheslack-Postava
Joel, this looks like it's failing for basically the same file set as the
last fix. We probably want to just ignore all of build/, not just
build/rat/rat-report.xml.

I would say rat may end up being too much of a hassle, but it already
caught invalid license headers in another case...

-Ewen

On Wed, Jul 8, 2015 at 10:39 AM, Apache Jenkins Server 
jenk...@builds.apache.org wrote:

 See https://builds.apache.org/job/KafkaPreCommit/142/changes

 Changes:

 [wangguoz] KAFKA-2308: make MemoryRecords idempotent; reviewed by Guozhang
 Wang

 [cshapi] KAFKA-2316: Drop java 1.6 support; patched by Sriharsha
 Chintalapani reviewed by Ismael Juma and Gwen Shapira

 --
 Started by an SCM change
 Building remotely on H11 (Ubuntu ubuntu) in workspace 
 https://builds.apache.org/job/KafkaPreCommit/ws/
 Cloning the remote Git repository
 Cloning repository https://git-wip-us.apache.org/repos/asf/kafka.git
   git init https://builds.apache.org/job/KafkaPreCommit/ws/ #
 timeout=10
 Fetching upstream changes from
 https://git-wip-us.apache.org/repos/asf/kafka.git
   git --version # timeout=10
   git fetch --tags --progress
 https://git-wip-us.apache.org/repos/asf/kafka.git
 +refs/heads/*:refs/remotes/origin/*
   git config remote.origin.url
 https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
   git config remote.origin.fetch +refs/heads/*:refs/remotes/origin/* #
 timeout=10
   git config remote.origin.url
 https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
 Fetching upstream changes from
 https://git-wip-us.apache.org/repos/asf/kafka.git
   git fetch --tags --progress
 https://git-wip-us.apache.org/repos/asf/kafka.git
 +refs/heads/*:refs/remotes/origin/*
   git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
   git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
 Checking out Revision 7df39e0394a6fd2f26e8f12768ebf7fecd56e3da
 (refs/remotes/origin/trunk)
   git config core.sparsecheckout # timeout=10
   git checkout -f 7df39e0394a6fd2f26e8f12768ebf7fecd56e3da
   git rev-list 4204f4a06bf23160ceec4aa54331db62681bff82 # timeout=10
 Setting
 GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
 [KafkaPreCommit] $ /bin/bash -xe /tmp/hudson301654436593382.sh
 +
 /home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1/bin/gradle
 To honour the JVM settings for this build a new JVM will be forked. Please
 consider using the daemon:
 http://gradle.org/docs/2.1/userguide/gradle_daemon.html.
 Download
 https://repo1.maven.org/maven2/org/ajoberstar/grgit/0.2.3/grgit-0.2.3.pom
 Download
 https://repo1.maven.org/maven2/org/eclipse/jgit/org.eclipse.jgit/3.3.0.201403021825-r/org.eclipse.jgit-3.3.0.201403021825-r.pom
 Download
 https://repo1.maven.org/maven2/org/eclipse/jgit/org.eclipse.jgit-parent/3.3.0.201403021825-r/org.eclipse.jgit-parent-3.3.0.201403021825-r.pom
 Download
 https://repo1.maven.org/maven2/org/eclipse/jgit/org.eclipse.jgit.ui/3.3.0.201403021825-r/org.eclipse.jgit.ui-3.3.0.201403021825-r.pom
 Download
 https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.jsch/0.0.7/jsch.agentproxy.jsch-0.0.7.pom
 Download
 https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy/0.0.7/jsch.agentproxy-0.0.7.pom
 Download
 https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.pageant/0.0.7/jsch.agentproxy.pageant-0.0.7.pom
 Download
 https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.sshagent/0.0.7/jsch.agentproxy.sshagent-0.0.7.pom
 Download
 https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.usocket-jna/0.0.7/jsch.agentproxy.usocket-jna-0.0.7.pom
 Download
 https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.usocket-nc/0.0.7/jsch.agentproxy.usocket-nc-0.0.7.pom
 Download
 https://repo1.maven.org/maven2/com/jcraft/jsch/0.1.46/jsch-0.1.46.pom
 Download
 https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.core/0.0.7/jsch.agentproxy.core-0.0.7.pom
 Download
 https://repo1.maven.org/maven2/org/ajoberstar/grgit/0.2.3/grgit-0.2.3.jar
 Download
 https://repo1.maven.org/maven2/org/eclipse/jgit/org.eclipse.jgit/3.3.0.201403021825-r/org.eclipse.jgit-3.3.0.201403021825-r.jar
 Download
 https://repo1.maven.org/maven2/org/eclipse/jgit/org.eclipse.jgit.ui/3.3.0.201403021825-r/org.eclipse.jgit.ui-3.3.0.201403021825-r.jar
 Download
 https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.jsch/0.0.7/jsch.agentproxy.jsch-0.0.7.jar
 Download
 https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.pageant/0.0.7/jsch.agentproxy.pageant-0.0.7.jar
 Download
 https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.sshagent/0.0.7/jsch.agentproxy.sshagent-0.0.7.jar
 Download
 https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.usocket-jna/0.0.7/jsch.agentproxy.usocket-jna-0.0.7.jar
 Download
 https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.usocket-nc/0.0.7/jsch.agentproxy.usocket-nc-0.0.7.jar
 Download
 

Re: request forwarding in Kafka?

2015-07-08 Thread Qi Xu
ping~

On Tue, Jul 7, 2015 at 6:27 PM, Qi Xu shkir...@gmail.com wrote:

 Hi Kafka folks,
 I'm investigating about using Kafka as a data ingress endpoint for our
 analytic service.
 In our scenario,  Kafka cluster is behind NLB which has a public IP, and
 the data source service  is under another public IP. Basically the data
 source service cannot access Kafka cluster nodes directly. And a request
 hitting the public IP will be routed to any broker node. If Kafka supports
 request forwarding, then our scenario will work.  I noticed there's an
 article 
 http://grokbase.com/t/kafka/users/1441aa74t3/kafka-cluster-behind-a-hardware-load-balancer;
 and Jun Rao clearly stated that Kafka does not do request forwarding.

 I'm wondering if this conclusion is still true for latest Kafka version?
 If that, I'm wondering do you have any plan to add the support of request
 forwarding?

 If not, what's the best practice do you suggest to solve the problem?
 Here're what I think of:
 1)  Make the data source service have direct access to Kafka cluster
 nodes---unfortunately this is impossible for us.
 2)  Make all Kafka cluster nodes have public IP so that they can be
 accessed from our data source service. The concerns are a) we may not have
 enough IPs, b) the security.
 3) Add a proxy service which in the same network of Kafka, to act as data
 ingress service.
 What do you suggest?

 I'm new to Kafka and looking forward your reply. Thanks a lot.


 Tony





Re: Review Request 34554: Patch for KAFKA-2205

2015-07-08 Thread Jun Rao


 On July 7, 2015, 2:18 a.m., Jun Rao wrote:
  Thanks for the patch. A few more comments below.
  
  1. The patch doesn't apply. Could you rebase?
  2. Also, we need the logic to read all existing client configs. Is that in 
  a separate jira?
 
 Aditya Auradkar wrote:
 1. Will do.
 2. Hey Jun - I didn't understand what you meant by read all existing 
 client configs. Can you elaborate? In general, I'm submitting all followup 
 client config changes in subsequent patches to avoid making the patches too 
 large. This patch will basically refactor the code and make it possible to 
 receive client change notifications.

2. When starting up a broker, we need to read all existing client configs into 
ClientIdConfigHandler, right?


- Jun


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/34554/#review90622
---


On July 8, 2015, 2:14 a.m., Aditya Auradkar wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/34554/
 ---
 
 (Updated July 8, 2015, 2:14 a.m.)
 
 
 Review request for kafka, Joel Koshy and Jun Rao.
 
 
 Bugs: KAFKA-2205
 https://issues.apache.org/jira/browse/KAFKA-2205
 
 
 Repository: kafka
 
 
 Description
 ---
 
 KAFKA-2205: Summary of changes
 1. Generalized TopicConfigManager to DynamicConfigManager. It is now able to 
 handle multiple types of entities.
 2. Changed format of the notification znode as described in KIP-21
 3. Replaced TopicConfigManager with DynamicConfigManager.
 4. Added new testcases. Existing testcases all pass
 5. Added ConfigCommand to handle all config changes. Eventually this will 
 make calls to the broker once the new APIs are built; for now it talks to 
 ZooKeeper directly
 6. Addressed all of Jun's comments.
 
 
 Diffs
 -
 
   core/src/main/scala/kafka/admin/AdminUtils.scala 
 f06edf41c732a7b794e496d0048b0ce6f897e72b 
   core/src/main/scala/kafka/admin/ConfigCommand.scala PRE-CREATION 
   core/src/main/scala/kafka/admin/TopicCommand.scala 
 a2ecb9620d647bf8f957a1f00f52896438e804a7 
   core/src/main/scala/kafka/cluster/Partition.scala 
 2649090b6cbf8d442649f19fd7113a30d62bca91 
   core/src/main/scala/kafka/controller/KafkaController.scala 
 20f1499046c768adbcd2bf8ad5969589c8641f34 
   core/src/main/scala/kafka/controller/PartitionLeaderSelector.scala 
 bb6b5c8764522e7947bb08998256ce1deb717c84 
   core/src/main/scala/kafka/controller/TopicDeletionManager.scala 
 64ecb499f24bc801d48f86e1612d927cc08e006d 
   core/src/main/scala/kafka/server/ConfigHandler.scala PRE-CREATION 
   core/src/main/scala/kafka/server/KafkaServer.scala 
 18917bc4464b9403b16d85d20c3fd4c24893d1d3 
   core/src/main/scala/kafka/server/ReplicaFetcherThread.scala 
 c89d00b5976ffa34cafdae261239934b1b917bfe 
   core/src/main/scala/kafka/server/TopicConfigManager.scala 
 01b1b0a8efe6ab3ddc7bf9f1f535b01be4e2e6be 
   core/src/main/scala/kafka/utils/ZkUtils.scala 
 166814c2959a429e20f400d1c0e523090ce37d91 
   core/src/test/scala/unit/kafka/admin/AdminTest.scala 
 252ac813c8df1780c2dc5fa9e698fb43bb6d5cf8 
   core/src/test/scala/unit/kafka/admin/ConfigCommandTest.scala PRE-CREATION 
   core/src/test/scala/unit/kafka/admin/TopicCommandTest.scala 
 dcd69881445c29765f66a7d21d2d18437f4df428 
   core/src/test/scala/unit/kafka/server/DynamicConfigChangeTest.scala 
 8a871cfaf6a534acd1def06a5ac95b5c985b024c 
 
 Diff: https://reviews.apache.org/r/34554/diff/
 
 
 Testing
 ---
 
 1. Added new testcases for new code.
 2. Verified that both topic and client configs can be changed dynamically by 
 starting a local cluster
 
 
 Thanks,
 
 Aditya Auradkar
 




[jira] [Commented] (KAFKA-2319) After controlled shutdown: IllegalStateException: Kafka scheduler has not been started

2015-07-08 Thread Jason Gustafson (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14619671#comment-14619671
 ] 

Jason Gustafson commented on KAFKA-2319:


Seems like there might be a race condition with onControllerResignation getting 
invoked concurrently from shutdown and from another thread (such as in the 
zookeeper session expiration listener). In any case, the implementation of 
KafkaScheduler.shutdown in trunk now appears to explicitly allow multiple 
shutdown attempts (following the patch for KAFKA-1760), so I think this might 
not be an issue any longer.
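
A minimal sketch of what an idempotent shutdown looks like (hypothetical class and method names, not Kafka's actual KafkaScheduler code): a compare-and-set on the started flag turns repeated or concurrent shutdown calls into harmless no-ops instead of an IllegalStateException:

```java
import java.util.concurrent.atomic.AtomicBoolean;

class IdempotentScheduler {
    private final AtomicBoolean started = new AtomicBoolean(false);

    void startup() {
        started.set(true);
    }

    // Returns true only for the call that actually performed the shutdown;
    // later (or concurrent) calls observe started == false and do nothing.
    boolean shutdown() {
        return started.compareAndSet(true, false);
    }
}
```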

 After controlled shutdown: IllegalStateException: Kafka scheduler has not 
 been started
 --

 Key: KAFKA-2319
 URL: https://issues.apache.org/jira/browse/KAFKA-2319
 Project: Kafka
  Issue Type: Bug
Reporter: Jason Rosenberg

 Running 0.8.2.1, just saw this today at the end of a controlled shutdown.  It 
 doesn't happen every time, but I've seen it several times:
 {code}
 2015-07-07 18:54:28,424  INFO [Thread-4] server.KafkaServer - [Kafka Server 
 99], Controlled shutdown succeeded
 2015-07-07 18:54:28,425  INFO [Thread-4] network.SocketServer - [Socket 
 Server on Broker 99], Shutting down
 2015-07-07 18:54:28,435  INFO [Thread-4] network.SocketServer - [Socket 
 Server on Broker 99], Shutdown completed
 2015-07-07 18:54:28,435  INFO [Thread-4] server.KafkaRequestHandlerPool - 
 [Kafka Request Handler on Broker 99], shutting down
 2015-07-07 18:54:28,444  INFO [Thread-4] server.KafkaRequestHandlerPool - 
 [Kafka Request Handler on Broker 99], shut down completely
 2015-07-07 18:54:28,649  INFO [Thread-4] server.ReplicaManager - [Replica 
 Manager on Broker 99]: Shut down
 2015-07-07 18:54:28,649  INFO [Thread-4] server.ReplicaFetcherManager - 
 [ReplicaFetcherManager on broker 99] shutting down
 2015-07-07 18:54:28,650  INFO [Thread-4] server.ReplicaFetcherThread - 
 [ReplicaFetcherThread-0-95], Shutting down
 2015-07-07 18:54:28,750  INFO [Thread-4] server.ReplicaFetcherThread - 
 [ReplicaFetcherThread-0-95], Shutdown completed
 2015-07-07 18:54:28,750  INFO [ReplicaFetcherThread-0-95] 
 server.ReplicaFetcherThread - [ReplicaFetcherThread-0-95], Stopped
 2015-07-07 18:54:28,750  INFO [Thread-4] server.ReplicaFetcherThread - 
 [ReplicaFetcherThread-0-98], Shutting down
 2015-07-07 18:54:28,791  INFO [Thread-4] server.ReplicaFetcherThread - 
 [ReplicaFetcherThread-0-98], Shutdown completed
 2015-07-07 18:54:28,791  INFO [ReplicaFetcherThread-0-98] 
 server.ReplicaFetcherThread - [ReplicaFetcherThread-0-98], Stopped
 2015-07-07 18:54:28,791  INFO [Thread-4] server.ReplicaFetcherManager - 
 [ReplicaFetcherManager on broker 99] shutdown completed
 2015-07-07 18:54:28,819  INFO [Thread-4] server.ReplicaManager - [Replica 
 Manager on Broker 99]: Shut down completely
 2015-07-07 18:54:28,826  INFO [Thread-4] log.LogManager - Shutting down.
 2015-07-07 18:54:30,459  INFO [Thread-4] log.LogManager - Shutdown complete.
 2015-07-07 18:54:30,463  WARN [Thread-4] utils.Utils$ - Kafka scheduler has 
 not been started
 java.lang.IllegalStateException: Kafka scheduler has not been started
 at kafka.utils.KafkaScheduler.ensureStarted(KafkaScheduler.scala:114)
 at kafka.utils.KafkaScheduler.shutdown(KafkaScheduler.scala:86)
 at 
 kafka.controller.KafkaController.onControllerResignation(KafkaController.scala:350)
 at 
 kafka.controller.KafkaController.shutdown(KafkaController.scala:664)
 at 
 kafka.server.KafkaServer$$anonfun$shutdown$8.apply$mcV$sp(KafkaServer.scala:285)
 at kafka.utils.Utils$.swallow(Utils.scala:172)
 at kafka.utils.Logging$class.swallowWarn(Logging.scala:92)
 at kafka.utils.Utils$.swallowWarn(Utils.scala:45)
 at kafka.utils.Logging$class.swallow(Logging.scala:94)
 at kafka.utils.Utils$.swallow(Utils.scala:45)
 at kafka.server.KafkaServer.shutdown(KafkaServer.scala:285)
  ...
 {code}





[jira] [Commented] (KAFKA-2311) Consumer's ensureNotClosed method not thread safe

2015-07-08 Thread Tim Brooks (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14619684#comment-14619684
 ] 

Tim Brooks commented on KAFKA-2311:
---

I removed the unnecessary close check as suggested by Ewen.

 Consumer's ensureNotClosed method not thread safe
 -

 Key: KAFKA-2311
 URL: https://issues.apache.org/jira/browse/KAFKA-2311
 Project: Kafka
  Issue Type: Bug
  Components: clients
Reporter: Tim Brooks
Assignee: Tim Brooks
 Attachments: KAFKA-2311.patch, KAFKA-2311.patch


 When a call to the consumer is made, the first check is to see that the 
 consumer is not closed. This variable is not volatile, so there is no 
 guarantee that previous stores will be visible before a read.
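
 As an illustration of the fix, a minimal sketch (hypothetical names, not
 the actual KafkaConsumer code): marking the flag volatile establishes the
 happens-before visibility guarantee across threads:

```java
class ClosableClient {
    // volatile provides the happens-before edge: a close() performed by one
    // thread is guaranteed to be visible to ensureNotClosed() in another.
    private volatile boolean closed = false;

    void close() {
        closed = true;
    }

    void ensureNotClosed() {
        if (closed)
            throw new IllegalStateException("This consumer has already been closed.");
    }
}
```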





Re: Dropping support for Scala 2.9.x

2015-07-08 Thread Joe Stein
We should consider deprecating the Scala API so the Scala version doesn't even
matter anymore for folks... we could even pin the broker to a specific
Scala version too

Of course this makes sense for the Java producer but maybe not just yet for
the consumer, maybe 0.9.0.

Not having to build for 2.9 in 0.8.3 makes sense, yeah ... folks can still
use their 0.8.2.1 - 2.9 clients with 0.8.3, so there shouldn't be much fuss

+1

~ Joe Stein
- - - - - - - - - - - - - - - - - - -
  http://www.elodina.net
http://www.stealth.ly
- - - - - - - - - - - - - - - - - - -

On Wed, Jul 8, 2015 at 11:07 AM, Ashish Singh asi...@cloudera.com wrote:

 +1

 On Wed, Jul 8, 2015 at 9:52 AM, Guozhang Wang wangg...@gmail.com wrote:

  +1.
 
  Scala 2.9 is 4 years old and I think it is time to drop it.
 
  On Wed, Jul 8, 2015 at 7:22 AM, Grant Henke ghe...@cloudera.com wrote:
 
   +1 for dropping 2.9
  
   On Wed, Jul 8, 2015 at 9:15 AM, Sriharsha Chintalapani 
 ka...@harsha.io
   wrote:
  
I am +1 on dropping 2.9.x support.
   
Thanks,
Harsha
   
   
On July 8, 2015 at 7:08:12 AM, Ismael Juma (mli...@juma.me.uk)
 wrote:
   
Hi,
   
The responses in this thread were positive, but there weren't many. A few
months passed and Sriharsha encouraged me to reopen the thread given that
the 2.9 build has been broken for at least a week[1] and no-one seemed to
notice.
   
Do we want to invest more time so that the 2.9 build continues to work, or
do we want to focus our efforts on 2.10 and 2.11? Please share your
opinion.
   
Best,
Ismael
   
[1] https://issues.apache.org/jira/browse/KAFKA-2325
   
On Fri, Mar 27, 2015 at 2:20 PM, Ismael Juma mli...@juma.me.uk
  wrote:
   
 Hi all,

 The Kafka build currently includes support for Scala 2.9, which means
 that it cannot take advantage of features introduced in Scala 2.10 or
 depend on libraries that require it.

 This restricts the solutions available while trying to solve existing
 issues. I was browsing JIRA looking for areas to contribute and I
 quickly ran into two issues where this is the case:

 * KAFKA-1351: "String.format is very expensive in Scala" could be solved
 nicely by using the String interpolation feature introduced in Scala 2.10.

 * KAFKA-1595: "Remove deprecated and slower scala JSON parser from
 kafka.consumer.TopicCount" could be solved by using an existing JSON
 library, but both jackson-scala and play-json require 2.10 (argonaut
 supports Scala 2.9, but it brings other dependencies like scalaz). We
 can work around this by writing our own code instead of using libraries,
 of course, but it's not ideal.

 Other features like Scala Futures and value classes would also be useful
 in some situations, I would think (for a more extensive list of new
 features, see
 http://scala-language.1934581.n4.nabble.com/Scala-2-10-0-now-available-td4634126.html
 ).

 Another pain point of supporting 2.9.x is that it doubles the number of
 build and test configurations required from 2 to 4 (because the 2.9.x
 series was not necessarily binary compatible).

 A strong argument for maintaining support for 2.9.x was the client
 library, but that has been rewritten in Java.

 It's also worth mentioning that Scala 2.9.1 was released in August 2011
 (more than 3.5 years ago) and the 2.9.x series hasn't received updates
 of any sort since early 2013. Scala 2.10.0, in turn, was released in
 January 2013 (over 2 years ago) and 2.10.5, the last planned release in
 the 2.10.x series, has been recently released (so even 2.10.x won't be
 receiving updates any longer).

 All in all, I think it would not be unreasonable to drop support for
 Scala 2.9.x in a future release, but I may be missing something. What do
 others think?

 Ismael

   
  
  
  
   --
   Grant Henke
   Solutions Consultant | Cloudera
   ghe...@cloudera.com | twitter.com/gchenke | linkedin.com/in/granthenke
  
 
 
 
  --
  -- Guozhang
 



 --

 Regards,
 Ashish



Re: Review Request 36341: Patch for KAFKA-2311

2015-07-08 Thread Sriharsha Chintalapani

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/36341/#review91035
---

Ship it!


Ship It!

- Sriharsha Chintalapani


On July 9, 2015, 1:04 a.m., Tim Brooks wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/36341/
 ---
 
 (Updated July 9, 2015, 1:04 a.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-2311
 https://issues.apache.org/jira/browse/KAFKA-2311
 
 
 Repository: kafka
 
 
 Description
 ---
 
 Remove unnecessary close check
 
 
 Diffs
 -
 
   clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java 
 1f0e51557c4569f0980b72652846b250d00e05d6 
 
 Diff: https://reviews.apache.org/r/36341/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Tim Brooks
 




Re: Review Request 36333: Patch for KAFKA-2123

2015-07-08 Thread Guozhang Wang

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/36333/#review91012
---


Browsed through the patch, overall looks very promising.

I am not very clear on a few detailed changes though:

1. The request future adapter / handler modifications.
2. Retry backoff implementation seems not correct.

Could you explain a little bit on these two aspects?


clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerCommitCallback.java
 (line 27)
https://reviews.apache.org/r/36333/#comment144213

You may want to add the committed offset map in the callback since 
otherwise it is unclear which commit it is referring to when triggered.
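
A minimal sketch of the suggested callback shape (names are illustrative, not the actual API): passing the committed offset map to onComplete lets the application tell which commit a given invocation refers to:

```java
import java.util.Map;

// Hypothetical callback interface: the committed offsets are delivered
// alongside any error, so callers can correlate callback and commit.
interface OffsetCommitCallback {
    void onComplete(Map<String, Long> committedOffsets, Exception exception);
}

// A tiny callback that records what it was told, for demonstration only.
class RecordingCallback implements OffsetCommitCallback {
    Map<String, Long> lastOffsets;

    public void onComplete(Map<String, Long> committedOffsets, Exception exception) {
        lastOffsets = committedOffsets;
    }
}
```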



clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java 
(lines 542 - 545)
https://reviews.apache.org/r/36333/#comment144216

Is there any particular reason you want to materialize the sessionTimeoutMs 
variable? It seems to be referenced only once, at line 545.



clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java 
(line 765)
https://reviews.apache.org/r/36333/#comment144219

I think KAFKA-1894 is already fixed in this patch + KAFKA-2168?



clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java 
(line 786)
https://reviews.apache.org/r/36333/#comment144233

Add some comments: re-schedule the commit task for the next commit 
interval?



clients/src/main/java/org/apache/kafka/clients/consumer/internals/ConsumerNetworkClient.java
 (lines 88 - 93)
https://reviews.apache.org/r/36333/#comment144252

This is not introduced in the patch, but I am not sure if this is the right 
way to respect the backoff time. For example, if the destination broker is down 
for a short period of time, poll(retryBackoffMs) will return immediately, and 
hence this function will busy-loop calling poll() and flood the network with 
metadata requests, right?

What we want in this case is for the consumer to wait for retryBackoffMs 
before sending the next metadata request.
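
A minimal sketch of the suggested behavior (hypothetical names, not the actual ConsumerNetworkClient code): track the time of the last metadata attempt and compute how long to block before the next one, instead of polling in a tight loop:

```java
class MetadataBackoff {
    private final long retryBackoffMs;
    private long lastAttemptMs = -1; // -1 means no attempt has been made yet

    MetadataBackoff(long retryBackoffMs) {
        this.retryBackoffMs = retryBackoffMs;
    }

    // How long the caller should wait before the next metadata request;
    // 0 means a request may be sent immediately.
    long msUntilNextAttempt(long nowMs) {
        if (lastAttemptMs < 0)
            return 0;
        return Math.max(0, lastAttemptMs + retryBackoffMs - nowMs);
    }

    void recordAttempt(long nowMs) {
        lastAttemptMs = nowMs;
    }
}
```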



clients/src/main/java/org/apache/kafka/clients/consumer/internals/ConsumerNetworkClient.java
 (line 119)
https://reviews.apache.org/r/36333/#comment144250

This function can be private.



clients/src/main/java/org/apache/kafka/clients/consumer/internals/ConsumerNetworkClient.java
 (lines 135 - 138)
https://reviews.apache.org/r/36333/#comment144253

Same as above in awaitMetadataUpdate().



clients/src/main/java/org/apache/kafka/clients/consumer/internals/Coordinator.java
 (line 114)
https://reviews.apache.org/r/36333/#comment144275

We can remove this line since it is checked inside ensureAssignment() 
already.



clients/src/main/java/org/apache/kafka/clients/consumer/internals/Coordinator.java
 (line 137)
https://reviews.apache.org/r/36333/#comment144239

Renaming to ensurePartitionAssigned?



clients/src/main/java/org/apache/kafka/clients/consumer/internals/Coordinator.java
 (line 212)
https://reviews.apache.org/r/36333/#comment144276

There is a potential risk of not aligning the scheduling of the heartbeat with 
the discovery of the coordinator. For example, let's say:

1. at t0 we call initHeartbeatTask with interval 100;
2. at t1 the consumer has already found the coordinator, but it will not send 
the first HB until t100;
3. at t100 the consumer may find it has already been kicked out of the group 
by the coordinator, so it re-joins the group and reschedules the HB at t200;
4. at t101 the consumer has re-joined the group, but will not send the HB 
until t200, and so on.
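
A minimal sketch of one way to align the two (hypothetical names): reset the heartbeat deadline at the moment the coordinator is discovered or the group is re-joined, rather than firing at fixed multiples of the interval from t0:

```java
class HeartbeatTimer {
    private final long intervalMs;
    private long baseMs; // when the current heartbeat schedule started

    HeartbeatTimer(long intervalMs, long nowMs) {
        this.intervalMs = intervalMs;
        this.baseMs = nowMs;
    }

    // Call on coordinator discovery or group re-join, so the next heartbeat
    // is one interval from *now*, not from the original task creation time.
    void reset(long nowMs) {
        baseMs = nowMs;
    }

    long nextHeartbeatDeadline() {
        return baseMs + intervalMs;
    }
}
```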



clients/src/main/java/org/apache/kafka/clients/consumer/internals/Coordinator.java
 (line 280)
https://reviews.apache.org/r/36333/#comment144277

Why do we change the behavior to directly throw an exception here?



clients/src/main/java/org/apache/kafka/clients/consumer/internals/DelayedTask.java
 (line 22)
https://reviews.apache.org/r/36333/#comment144246

This comment is not correct since the function returns void.



clients/src/main/java/org/apache/kafka/clients/consumer/internals/RequestFutureAdapter.java
 (lines 1 - 43)
https://reviews.apache.org/r/36333/#comment144278

It is not clear why you want to convert the future type in this adapter. Can 
you elaborate a bit?


- Guozhang Wang


On July 8, 2015, 9:19 p.m., Jason Gustafson wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/36333/
 ---
 
 (Updated July 8, 2015, 9:19 p.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-2123
 https://issues.apache.org/jira/browse/KAFKA-2123
 
 
 Repository: kafka
 
 
 Description
 ---
 
 KAFKA-2123; resolve problems from rebase
 
 
 Diffs
 -
 
   clients/src/main/java/org/apache/kafka/clients/consumer/Consumer.java 
 

Review Request 36341: Patch for KAFKA-2311

2015-07-08 Thread Tim Brooks

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/36341/
---

Review request for kafka.


Bugs: KAFKA-2311
https://issues.apache.org/jira/browse/KAFKA-2311


Repository: kafka


Description
---

Remove unnecessary close check


Diffs
-

  clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java 
1f0e51557c4569f0980b72652846b250d00e05d6 

Diff: https://reviews.apache.org/r/36341/diff/


Testing
---


Thanks,

Tim Brooks



Build failed in Jenkins: KafkaPreCommit #142

2015-07-08 Thread Joel Koshy
Agreed that we should change it to exclude build/** - however, the build 142
failure does not seem to be rat-related, is it? E.g., compare the console
output with an earlier build (138, I think). Sorry, I can't verify/debug right
now, but I can look tomorrow.

On Wednesday, July 8, 2015, Ewen Cheslack-Postava e...@confluent.io wrote:

 Joel, this looks like it's failing for basically the same file set as the
 last fix. We probably want to just ignore all of build/, not just
 build/rat/rat-report.xml.

 I would say rat may end up being too much of a hassle, but it already
 caught invalid license headers in another case...

 -Ewen

 On Wed, Jul 8, 2015 at 10:39 AM, Apache Jenkins Server 
 jenk...@builds.apache.org wrote:

  See https://builds.apache.org/job/KafkaPreCommit/142/changes
 
  Changes:
 
  [wangguoz] KAFKA-2308: make MemoryRecords idempotent; reviewed by
 Guozhang
  Wang
 
  [cshapi] KAFKA-2316: Drop java 1.6 support; patched by Sriharsha
  Chintalapani reviewed by Ismael Juma and Gwen Shapira
 
  --
  Started by an SCM change
  Building remotely on H11 (Ubuntu ubuntu) in workspace 
  https://builds.apache.org/job/KafkaPreCommit/ws/
  Cloning the remote Git repository
  Cloning repository https://git-wip-us.apache.org/repos/asf/kafka.git
git init https://builds.apache.org/job/KafkaPreCommit/ws/ #
  timeout=10
  Fetching upstream changes from
  https://git-wip-us.apache.org/repos/asf/kafka.git
git --version # timeout=10
git fetch --tags --progress
  https://git-wip-us.apache.org/repos/asf/kafka.git
  +refs/heads/*:refs/remotes/origin/*
git config remote.origin.url
  https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
git config remote.origin.fetch +refs/heads/*:refs/remotes/origin/* #
  timeout=10
git config remote.origin.url
  https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
  Fetching upstream changes from
  https://git-wip-us.apache.org/repos/asf/kafka.git
git fetch --tags --progress
  https://git-wip-us.apache.org/repos/asf/kafka.git
  +refs/heads/*:refs/remotes/origin/*
git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
  Checking out Revision 7df39e0394a6fd2f26e8f12768ebf7fecd56e3da
  (refs/remotes/origin/trunk)
git config core.sparsecheckout # timeout=10
git checkout -f 7df39e0394a6fd2f26e8f12768ebf7fecd56e3da
git rev-list 4204f4a06bf23160ceec4aa54331db62681bff82 # timeout=10
  Setting
 
 GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
  [KafkaPreCommit] $ /bin/bash -xe /tmp/hudson301654436593382.sh
  +
 
 /home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1/bin/gradle
  To honour the JVM settings for this build a new JVM will be forked.
 Please
  consider using the daemon:
  http://gradle.org/docs/2.1/userguide/gradle_daemon.html.
  Download
 
 https://repo1.maven.org/maven2/org/ajoberstar/grgit/0.2.3/grgit-0.2.3.pom
  Download
 
 https://repo1.maven.org/maven2/org/eclipse/jgit/org.eclipse.jgit/3.3.0.201403021825-r/org.eclipse.jgit-3.3.0.201403021825-r.pom
  Download
 
 https://repo1.maven.org/maven2/org/eclipse/jgit/org.eclipse.jgit-parent/3.3.0.201403021825-r/org.eclipse.jgit-parent-3.3.0.201403021825-r.pom
  Download
 
 https://repo1.maven.org/maven2/org/eclipse/jgit/org.eclipse.jgit.ui/3.3.0.201403021825-r/org.eclipse.jgit.ui-3.3.0.201403021825-r.pom
  Download
 
 https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.jsch/0.0.7/jsch.agentproxy.jsch-0.0.7.pom
  Download
 
 https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy/0.0.7/jsch.agentproxy-0.0.7.pom
  Download
 
 https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.pageant/0.0.7/jsch.agentproxy.pageant-0.0.7.pom
  Download
 
 https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.sshagent/0.0.7/jsch.agentproxy.sshagent-0.0.7.pom
  Download
 
 https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.usocket-jna/0.0.7/jsch.agentproxy.usocket-jna-0.0.7.pom
  Download
 
 https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.usocket-nc/0.0.7/jsch.agentproxy.usocket-nc-0.0.7.pom
  Download
  https://repo1.maven.org/maven2/com/jcraft/jsch/0.1.46/jsch-0.1.46.pom
  Download
 
 https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.core/0.0.7/jsch.agentproxy.core-0.0.7.pom
  Download
 
 https://repo1.maven.org/maven2/org/ajoberstar/grgit/0.2.3/grgit-0.2.3.jar
  Download
 
 https://repo1.maven.org/maven2/org/eclipse/jgit/org.eclipse.jgit/3.3.0.201403021825-r/org.eclipse.jgit-3.3.0.201403021825-r.jar
  Download
 
 https://repo1.maven.org/maven2/org/eclipse/jgit/org.eclipse.jgit.ui/3.3.0.201403021825-r/org.eclipse.jgit.ui-3.3.0.201403021825-r.jar
  Download
 
 https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.jsch/0.0.7/jsch.agentproxy.jsch-0.0.7.jar
  Download