[jira] [Updated] (KAFKA-1890) Fix bug preventing Mirror Maker from successful rebalance.

2015-01-22 Thread Joe Stein (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Stein updated KAFKA-1890:
-
Priority: Blocker  (was: Major)

> Fix bug preventing Mirror Maker from successful rebalance.
> --
>
> Key: KAFKA-1890
> URL: https://issues.apache.org/jira/browse/KAFKA-1890
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
>Priority: Blocker
> Fix For: 0.8.3
>
> Attachments: KAFKA-1890.patch
>
>
> Follow-up patch for KAFKA-1650



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1890) Follow-up patch for KAFKA-1650

2015-01-22 Thread Joe Stein (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Stein updated KAFKA-1890:
-
Description: (was: Fix bug preventing Mirror Maker from successful 
rebalance.)

> Follow-up patch for KAFKA-1650
> --
>
> Key: KAFKA-1890
> URL: https://issues.apache.org/jira/browse/KAFKA-1890
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Fix For: 0.8.3
>
> Attachments: KAFKA-1890.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1890) Fix bug preventing Mirror Maker from successful rebalance.

2015-01-22 Thread Joe Stein (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Stein updated KAFKA-1890:
-
Description: Follow-up patch for KAFKA-1650

> Fix bug preventing Mirror Maker from successful rebalance.
> --
>
> Key: KAFKA-1890
> URL: https://issues.apache.org/jira/browse/KAFKA-1890
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Fix For: 0.8.3
>
> Attachments: KAFKA-1890.patch
>
>
> Follow-up patch for KAFKA-1650



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1890) Fix bug preventing Mirror Maker from successful rebalance.

2015-01-22 Thread Joe Stein (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Stein updated KAFKA-1890:
-
Summary: Fix bug preventing Mirror Maker from successful rebalance.  (was: 
Follow-up patch for KAFKA-1650)

> Fix bug preventing Mirror Maker from successful rebalance.
> --
>
> Key: KAFKA-1890
> URL: https://issues.apache.org/jira/browse/KAFKA-1890
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Fix For: 0.8.3
>
> Attachments: KAFKA-1890.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1890) Fix bug preventing Mirror Maker from successful rebalance.

2015-01-22 Thread Joe Stein (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Stein updated KAFKA-1890:
-
Fix Version/s: 0.8.3

> Fix bug preventing Mirror Maker from successful rebalance.
> --
>
> Key: KAFKA-1890
> URL: https://issues.apache.org/jira/browse/KAFKA-1890
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Fix For: 0.8.3
>
> Attachments: KAFKA-1890.patch
>
>
> Follow-up patch for KAFKA-1650



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1889) Refactor shell wrapper scripts

2015-01-22 Thread Joe Stein (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14288152#comment-14288152
 ] 

Joe Stein commented on KAFKA-1889:
--

Makes sense. How do we know this works well for rpm and deb? 

What about having rpm and deb scripts for folks to use when making packages 
that wrap what you did?

If we introduce something that doesn't cover that, we will just generate more 
questions and issues and have to support them. It would be great to minimize 
those things with these changes.

> Refactor shell wrapper scripts
> --
>
> Key: KAFKA-1889
> URL: https://issues.apache.org/jira/browse/KAFKA-1889
> Project: Kafka
>  Issue Type: Improvement
>  Components: packaging
>Reporter: Francois Saint-Jacques
>Assignee: Francois Saint-Jacques
>Priority: Minor
> Attachments: refactor-scripts-v1.patch, refactor-scripts-v2.patch
>
>
> Shell scripts in bin/ need love.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1889) Refactor shell wrapper scripts

2015-01-22 Thread Joe Stein (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14288061#comment-14288061
 ] 

Joe Stein commented on KAFKA-1889:
--

Thanks for the patch [~fsaintjacques]. What is the motivation for this? Have 
you seen the work on the new CLI 
https://issues.apache.org/jira/browse/KAFKA-1694, and have you thought about 
how that might be used moving forward?

> Refactor shell wrapper scripts
> --
>
> Key: KAFKA-1889
> URL: https://issues.apache.org/jira/browse/KAFKA-1889
> Project: Kafka
>  Issue Type: Improvement
>  Components: packaging
>Reporter: Francois Saint-Jacques
>Assignee: Francois Saint-Jacques
>Priority: Minor
> Attachments: refactor-scripts-v1.patch, refactor-scripts-v2.patch
>
>
> Shell scripts in bin/ need love.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1845) KafkaConfig should use ConfigDef

2015-01-22 Thread Joe Stein (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1845?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Stein updated KAFKA-1845:
-
Fix Version/s: 0.8.3

> KafkaConfig should use ConfigDef 
> -
>
> Key: KAFKA-1845
> URL: https://issues.apache.org/jira/browse/KAFKA-1845
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Gwen Shapira
>Assignee: Andrii Biletskyi
>  Labels: newbie
> Fix For: 0.8.3
>
> Attachments: KAFKA-1845.patch
>
>
> ConfigDef is already used for the new producer and for TopicConfig. 
> It will be nice to standardize on one configuration and validation library 
> across the board.
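As a rough illustration of what standardizing on ConfigDef buys, a sketch 
against the real org.apache.kafka.common.config.ConfigDef API; the keys and 
defaults are illustrative, not the actual KafkaConfig migration:

    import org.apache.kafka.common.config.ConfigDef
    import org.apache.kafka.common.config.ConfigDef.{Importance, Type}

    object BrokerConfigSketch {
      // Each define() carries key, type, default, importance, and doc string,
      // giving type checking, validation, and generated docs for free.
      val definition: ConfigDef = new ConfigDef()
        .define("port", Type.INT, 9092, Importance.HIGH,
                "The port the broker listens on")
        .define("log.retention.bytes", Type.LONG, -1L, Importance.MEDIUM,
                "Maximum size a log may reach before old segments are deleted")
    }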



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1367) Broker topic metadata not kept in sync with ZooKeeper

2015-01-21 Thread Joe Stein (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Stein updated KAFKA-1367:
-
Fix Version/s: 0.8.3

> Broker topic metadata not kept in sync with ZooKeeper
> -
>
> Key: KAFKA-1367
> URL: https://issues.apache.org/jira/browse/KAFKA-1367
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.0, 0.8.1
>Reporter: Ryan Berdeen
>  Labels: newbie++
> Fix For: 0.8.3
>
> Attachments: KAFKA-1367.txt
>
>
> When a broker is restarted, the topic metadata responses from the brokers 
> will be incorrect (different from ZooKeeper) until a preferred replica leader 
> election.
> In the metadata, it looks like leaders are correctly removed from the ISR 
> when a broker disappears, but followers are not. Then, when a broker 
> reappears, the ISR is never updated.
> I used a variation of the Vagrant setup created by Joe Stein to reproduce 
> this with the latest from the 0.8.1 branch: 
> https://github.com/also/kafka/commit/dba36a503a5e22ea039df0f9852560b4fb1e067c
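The description notes the stale metadata clears after a preferred replica 
leader election; for reference, this is how one would be triggered manually 
with the 0.8.x tooling (the ZooKeeper address is a placeholder):

    # Trigger a preferred replica leader election for all partitions
    bin/kafka-preferred-replica-election.sh --zookeeper localhost:2181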



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1688) Add authorization interface and naive implementation

2015-01-19 Thread Joe Stein (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14283196#comment-14283196
 ] 

Joe Stein commented on KAFKA-1688:
--

You can create a child page or similar; I just checked your permissions and 
they look OK to do so.

> Add authorization interface and naive implementation
> 
>
> Key: KAFKA-1688
> URL: https://issues.apache.org/jira/browse/KAFKA-1688
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Jay Kreps
>Assignee: Sriharsha Chintalapani
> Fix For: 0.8.3
>
>
> Add a PermissionManager interface as described here:
> https://cwiki.apache.org/confluence/display/KAFKA/Security
> (possibly there is a better name?)
> Implement calls to the PermissionsManager in KafkaApis for the main requests 
> (FetchRequest, ProduceRequest, etc). We will need to add a new error code and 
> exception to the protocol to indicate "permission denied".
> Add a server configuration specifying the class you want to instantiate that 
> implements that interface. That class can define its own configuration 
> properties from the main config file.
> Provide a simple implementation of this interface which just takes a user and 
> ip whitelist and permits those in either of the whitelists to do anything, 
> and denies all others.
> Rather than writing an integration test for this class we can probably just 
> use this class for the TLS and SASL authentication testing.
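Since the ticket explicitly leaves the shape of the interface open ("possibly 
there is a better name?"), here is a hypothetical sketch of the described 
PermissionManager plus the naive user/IP-whitelist implementation; every name, 
property key, and signature below is illustrative, not Kafka's API:

    import java.util.Properties

    // Hypothetical interface per this ticket's description.
    trait PermissionManager {
      def configure(props: Properties): Unit
      def isPermitted(user: String, ip: String,
                      operation: String, resource: String): Boolean
    }

    // Naive implementation: anyone on either whitelist may do anything;
    // everyone else gets the proposed "permission denied" error.
    class WhitelistPermissionManager extends PermissionManager {
      private var users = Set.empty[String]
      private var ips   = Set.empty[String]

      override def configure(props: Properties): Unit = {
        users = Option(props.getProperty("permission.user.whitelist"))
                  .map(_.split(",").toSet).getOrElse(Set.empty)
        ips   = Option(props.getProperty("permission.ip.whitelist"))
                  .map(_.split(",").toSet).getOrElse(Set.empty)
      }

      override def isPermitted(user: String, ip: String,
                               operation: String, resource: String): Boolean =
        users.contains(user) || ips.contains(ip)
    }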



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1688) Add authorization interface and naive implementation

2015-01-19 Thread Joe Stein (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14283108#comment-14283108
 ] 

Joe Stein commented on KAFKA-1688:
--

What is your Confluence username so I can grant you permission?

> Add authorization interface and naive implementation
> 
>
> Key: KAFKA-1688
> URL: https://issues.apache.org/jira/browse/KAFKA-1688
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Jay Kreps
>Assignee: Sriharsha Chintalapani
> Fix For: 0.8.3
>
>
> Add a PermissionManager interface as described here:
> https://cwiki.apache.org/confluence/display/KAFKA/Security
> (possibly there is a better name?)
> Implement calls to the PermissionsManager in KafkaApis for the main requests 
> (FetchRequest, ProduceRequest, etc). We will need to add a new error code and 
> exception to the protocol to indicate "permission denied".
> Add a server configuration specifying the class you want to instantiate that 
> implements that interface. That class can define its own configuration 
> properties from the main config file.
> Provide a simple implementation of this interface which just takes a user and 
> ip whitelist and permits those in either of the whitelists to do anything, 
> and denies all others.
> Rather than writing an integration test for this class we can probably just 
> use this class for the TLS and SASL authentication testing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1876) pom file for scala 2.11 should reference a specific version

2015-01-19 Thread Joe Stein (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Stein updated KAFKA-1876:
-
Attachment: twoeleven.tgz

gradle works, yes; however maven fails, since it doesn't have unicorns and 
pixie dust built in. 

I attached the project I used to test gradle and maven w/ 2.11 support.

We need this patch; otherwise the pom won't work for anyone using Kafka who 
wants 2.11.

patch LGTM +1

> pom file for scala 2.11 should reference a specific version
> ---
>
> Key: KAFKA-1876
> URL: https://issues.apache.org/jira/browse/KAFKA-1876
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2
>Reporter: Jun Rao
>Assignee: Jun Rao
>Priority: Blocker
> Fix For: 0.8.2
>
> Attachments: kafka-1876.patch, twoeleven.tgz
>
>
> Currently, the pom file specifies the following scala dependency for 2.11.
> <dependency>
>   <groupId>org.scala-lang</groupId>
>   <artifactId>scala-library</artifactId>
>   <version>2.11</version>
>   <scope>compile</scope>
> </dependency>
> However, there is no 2.11 in maven central (there are only 2.11.1, 2.11.2, 
> etc).
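The fix is to pin an artifact version that actually exists in Maven Central, 
e.g. one of the versions the ticket names (the exact pick is up to the patch):

    <dependency>
      <groupId>org.scala-lang</groupId>
      <artifactId>scala-library</artifactId>
      <version>2.11.1</version>
      <scope>compile</scope>
    </dependency>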



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1856) Add PreCommit Patch Testing

2015-01-17 Thread Joe Stein (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14281546#comment-14281546
 ] 

Joe Stein commented on KAFKA-1856:
--

The way we have been developing since incubation is on trunk. When we are 
ready (or close to ready) we make a branch for that release off of trunk. This 
allows new work to occur on trunk and any blockers to be double committed to 
the release branch and trunk. If we ever need (as was the case with 0.8.0) to 
make an update on that branched release, we update on that branch (again 
double committing to trunk). When we actually do a final release we put it on 
a tag, e.g. 0.8.1.0, 0.8.1.1, 0.8.2.0, etc., so as not to conflict with the 
branch name and to give the branch the ability to get minor blocker fixes out.

> Add PreCommit Patch Testing
> ---
>
> Key: KAFKA-1856
> URL: https://issues.apache.org/jira/browse/KAFKA-1856
> Project: Kafka
>  Issue Type: Task
>Reporter: Ashish Kumar Singh
>Assignee: Ashish Kumar Singh
> Attachments: KAFKA-1856.patch
>
>
> h1. Kafka PreCommit Patch Testing - *Don't wait for it to break*
> h2. Motivation
> *With great power comes great responsibility* - Uncle Ben. As the Kafka user 
> list grows, a mechanism to ensure the quality of the product is required. 
> Quality becomes hard to measure and maintain in an open source project, 
> because of a wide community of contributors. Luckily, Kafka is not the first 
> open source project and can benefit from the learnings of prior projects.
> PreCommit tests are the tests that are run for each patch that gets attached 
> to an open JIRA. Based on test results, the test execution framework (test 
> bot) gives the patch a +1 or -1. Having PreCommit tests takes the load off 
> committers having to look at or test each patch.
> h2. Tests in Kafka
> h3. Unit and Integration Tests
> [Unit and Integration 
> tests|https://cwiki.apache.org/confluence/display/KAFKA/Kafka+0.9+Unit+and+Integration+Tests]
>  are cardinal in helping contributors avoid breaking existing functionality 
> while adding new functionality or fixing older ones. These tests, at least 
> the ones relevant to the changes, must be run by contributors before 
> attaching a patch to a JIRA.
> h3. System Tests
> [System 
> tests|https://cwiki.apache.org/confluence/display/KAFKA/Kafka+System+Tests] 
> are much broader tests that, unlike unit tests, focus on end-to-end 
> scenarios rather than on a specific method or class.
> h2. Apache PreCommit tests
> Apache provides a mechanism to automatically build a project and run a series 
> of tests whenever a patch is uploaded to a JIRA. Based on test execution, the 
> test framework will comment with a +1 or -1 on the JIRA.
> You can read more about the framework here:
> http://wiki.apache.org/general/PreCommitBuilds
> h2. Plan
> # Create a test-patch.py script (similar to the one used in Flume, Sqoop and 
> other projects) that will take a JIRA as a parameter, apply the patch on the 
> appropriate branch, build the project, run tests, and report results (a 
> hypothetical invocation is sketched below). This script should be committed 
> into the Kafka code-base. To begin with, this will only run unit tests. We 
> can add code sanity checks, system tests, etc. in the future.
> # Create a Jenkins job for running the test (as described in 
> http://wiki.apache.org/general/PreCommitBuilds) and validate that it works 
> manually. This must be done by a committer with Jenkins access.
> # Ask someone with access to https://builds.apache.org/job/PreCommit-Admin/ 
> to add Kafka to the list of projects PreCommit-Admin triggers.
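For a sense of the workflow, a hypothetical invocation of such a script, 
modeled loosely on the Flume/Sqoop equivalents the plan references; the script 
name comes from the plan, while the flag is illustrative rather than a 
committed interface:

    # Hypothetical: fetch the latest patch attached to the JIRA, apply it,
    # build, run unit tests, and post the +1/-1 result back to the issue.
    python test-patch.py --jira KAFKA-1856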



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1845) KafkaConfig should use ConfigDef

2015-01-17 Thread Joe Stein (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1845?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Stein updated KAFKA-1845:
-
Reviewer: Gwen Shapira
Assignee: Andrii Biletskyi

Andrii, let's get this patch out of the way so that we can get back to the 
CLI & global config changes, since this is a direct blocker of those, please.

> KafkaConfig should use ConfigDef 
> -
>
> Key: KAFKA-1845
> URL: https://issues.apache.org/jira/browse/KAFKA-1845
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Gwen Shapira
>Assignee: Andrii Biletskyi
>  Labels: newbie
>
> ConfigDef is already used for the new producer and for TopicConfig. 
> It will be nice to standardize on one configuration and validation library 
> across the board.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1856) Add PreCommit Patch Testing

2015-01-16 Thread Joe Stein (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Stein updated KAFKA-1856:
-
Reviewer: Gwen Shapira

Once this is working, let me know and I can create the Jenkins build for 
whatever we need.

> Add PreCommit Patch Testing
> ---
>
> Key: KAFKA-1856
> URL: https://issues.apache.org/jira/browse/KAFKA-1856
> Project: Kafka
>  Issue Type: Task
>Reporter: Ashish Kumar Singh
>Assignee: Ashish Kumar Singh
> Attachments: KAFKA-1856.patch
>
>
> h1. Kafka PreCommit Patch Testing - *Don't wait for it to break*
> h2. Motivation
> *With great power comes great responsibility* - Uncle Ben. As the Kafka user 
> list grows, a mechanism to ensure the quality of the product is required. 
> Quality becomes hard to measure and maintain in an open source project, 
> because of a wide community of contributors. Luckily, Kafka is not the first 
> open source project and can benefit from the learnings of prior projects.
> PreCommit tests are the tests that are run for each patch that gets attached 
> to an open JIRA. Based on test results, the test execution framework (test 
> bot) gives the patch a +1 or -1. Having PreCommit tests takes the load off 
> committers having to look at or test each patch.
> h2. Tests in Kafka
> h3. Unit and Integration Tests
> [Unit and Integration 
> tests|https://cwiki.apache.org/confluence/display/KAFKA/Kafka+0.9+Unit+and+Integration+Tests]
>  are cardinal in helping contributors avoid breaking existing functionality 
> while adding new functionality or fixing older ones. These tests, at least 
> the ones relevant to the changes, must be run by contributors before 
> attaching a patch to a JIRA.
> h3. System Tests
> [System 
> tests|https://cwiki.apache.org/confluence/display/KAFKA/Kafka+System+Tests] 
> are much broader tests that, unlike unit tests, focus on end-to-end 
> scenarios rather than on a specific method or class.
> h2. Apache PreCommit tests
> Apache provides a mechanism to automatically build a project and run a series 
> of tests whenever a patch is uploaded to a JIRA. Based on test execution, the 
> test framework will comment with a +1 or -1 on the JIRA.
> You can read more about the framework here:
> http://wiki.apache.org/general/PreCommitBuilds
> h2. Plan
> # Create a test-patch.py script (similar to the one used in Flume, Sqoop and 
> other projects) that will take a JIRA as a parameter, apply the patch on the 
> appropriate branch, build the project, run tests, and report results. This 
> script should be committed into the Kafka code-base. To begin with, this will 
> only run unit tests. We can add code sanity checks, system tests, etc. in the 
> future.
> # Create a Jenkins job for running the test (as described in 
> http://wiki.apache.org/general/PreCommitBuilds) and validate that it works 
> manually. This must be done by a committer with Jenkins access.
> # Ask someone with access to https://builds.apache.org/job/PreCommit-Admin/ 
> to add Kafka to the list of projects PreCommit-Admin triggers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1810) Add IP Filtering / Whitelists-Blacklists

2015-01-15 Thread Joe Stein (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Stein updated KAFKA-1810:
-
Reviewer: Gwen Shapira

> Add IP Filtering / Whitelists-Blacklists 
> -
>
> Key: KAFKA-1810
> URL: https://issues.apache.org/jira/browse/KAFKA-1810
> Project: Kafka
>  Issue Type: New Feature
>  Components: core, network
>Reporter: Jeff Holoman
>Assignee: Jeff Holoman
>Priority: Minor
> Fix For: 0.8.3
>
> Attachments: KAFKA-1810.patch, KAFKA-1810_2015-01-15_19:47:14.patch
>
>
> While the longer-term security goals for Kafka are on the roadmap, there is 
> some value in the ability to restrict connections to Kafka brokers based on 
> IP address. This is not intended as a replacement for security but more of a 
> precaution against misconfiguration, and to provide some level of control to 
> Kafka administrators over who is reading/writing to their cluster.
> 1) In some organizations, software administration versus OS and network 
> administration is disjointed and not well choreographed. Providing software 
> administrators the ability to configure their platform relatively 
> independently (after initial configuration) from systems administrators is 
> desirable.
> 2) Configuration and deployment is sometimes error prone and there are 
> situations when test environments could erroneously read/write to production 
> environments
> 3) An additional precaution against reading sensitive data is typically 
> welcomed in most large enterprise deployments.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (KAFKA-1697) remove code related to ack>1 on the broker

2015-01-15 Thread Joe Stein (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14279037#comment-14279037
 ] 

Joe Stein edited comment on KAFKA-1697 at 1/15/15 6:21 PM:
---

With this patch I think we should change the existing functionality with > 1 
update to start to LOG as a WARN in the Broker (so it gets people's attention 
to stop using ack >1) but keep everything else the same... the new version of 
the request (with a match/case) should do the new functionality and we support 
both.


was (Author: joestein):
With this patch I think we should change the existing functionality with 1 
update to start to LOG as a WARN in the Broker (so it gets people's attention 
to stop using ack >1) but keep everything else the same... the new version of 
the request (with a match/case) should do the new functionality.

> remove code related to ack>1 on the broker
> --
>
> Key: KAFKA-1697
> URL: https://issues.apache.org/jira/browse/KAFKA-1697
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jun Rao
>Assignee: Gwen Shapira
> Fix For: 0.8.3
>
> Attachments: KAFKA-1697.patch, KAFKA-1697_2015-01-14_15:41:37.patch
>
>
> We removed the ack>1 support from the producer client in kafka-1555. We can 
> completely remove the code in the broker that supports ack>1.
> Also, we probably want to make NotEnoughReplicasAfterAppend a non-retriable 
> exception and let the client decide what to do.
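For context, the client side already enforces this after KAFKA-1555; in the 
0.8.x Scala producer's configuration only the remaining acknowledgement modes 
are valid:

    # 0 = fire and forget, 1 = leader ack only, -1 = wait for the full ISR;
    # values greater than 1 are no longer supported (KAFKA-1555)
    request.required.acks=-1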



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1697) remove code related to ack>1 on the broker

2015-01-15 Thread Joe Stein (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14279037#comment-14279037
 ] 

Joe Stein commented on KAFKA-1697:
--

With this patch I think we should change the existing functionality with 1 
update to start to LOG as a WARN in the Broker (so it gets people's attention 
to stop using ack >1) but keep everything else the same... the new version of 
the request (with a match/case) should do the new functionality.

> remove code related to ack>1 on the broker
> --
>
> Key: KAFKA-1697
> URL: https://issues.apache.org/jira/browse/KAFKA-1697
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jun Rao
>Assignee: Gwen Shapira
> Fix For: 0.8.3
>
> Attachments: KAFKA-1697.patch, KAFKA-1697_2015-01-14_15:41:37.patch
>
>
> We removed the ack>1 support from the producer client in kafka-1555. We can 
> completely remove the code in the broker that supports ack>1.
> Also, we probably want to make NotEnoughReplicasAfterAppend a non-retriable 
> exception and let the client decide what to do.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1760) Implement new consumer client

2015-01-12 Thread Joe Stein (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Stein updated KAFKA-1760:
-
Fix Version/s: 0.8.3

> Implement new consumer client
> -
>
> Key: KAFKA-1760
> URL: https://issues.apache.org/jira/browse/KAFKA-1760
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Reporter: Jay Kreps
>Assignee: Jay Kreps
> Fix For: 0.8.3
>
> Attachments: KAFKA-1760.patch, KAFKA-1760_2015-01-11_16:57:15.patch
>
>
> Implement a consumer client.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1850) Failed reassignment leads to additional replica

2015-01-12 Thread Joe Stein (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14274657#comment-14274657
 ] 

Joe Stein commented on KAFKA-1850:
--

The reassignment isn't going to be able to finish until the new replica(s) 
is/are caught up.

Are all of your brokers up? How much data is in your partitions? 

ERROR: Assigned replicas (2,1,0) don't match the list of replicas for 
reassignment (2,1) for partition [testingTopic,9]

This means that replica #1 has not replicated everything and caught up to #2 
yet (the leader).

It is possible that the reassignment is still running but the replicas are just 
not catching up with the leader (so it is never finishing).  This could be due 
to data size, volume, and threads (they just can't keep up) given the broker 
configuration. It could also be due to a different max message size on brokers 
#0 and #2 than on #1, so there is a message that can't be fetched and the 
replica won't catch up.

Can you confirm whether there is data in the partitions on the new broker? Do 
you see new data coming in (you can look on disk at the directories)? 

It could be wedged/stuck and just not finishing.

One option is to restart the leader for each failing partition. I have seen 
that solve this issue before, but I don't know if the problem you are having is 
in fact a bug or just the brokers simply not catching up.  It could also be the 
controller, so restarting broker #2 may end up being what you have to do to fix 
this.

I would investigate first to confirm that the issue is simply the new broker 
not being able to catch up, and try to resolve that before restarting brokers 
that are live leaders, as restarting them could have a negative impact on your 
cluster.
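One concrete way to check whether the reassignment is still in flight is the 
tool's own verify mode; a sketch using the file and ZooKeeper host names from 
the reporter's own examples:

    # Report per-partition status of the in-flight reassignment (0.8.x tooling)
    bin/kafka-reassign-partitions.sh --verify \
      --reassignment-json-file ./alex-expand-cluster-reassignment.json \
      --zookeeper 192.168.112.95:2181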


> Failed reassignment leads to additional replica
> ---
>
> Key: KAFKA-1850
> URL: https://issues.apache.org/jira/browse/KAFKA-1850
> Project: Kafka
>  Issue Type: Bug
>  Components: controller
>Affects Versions: 0.8.1
> Environment: CentOS  (Linux Kernel 2.6.32-71.el6.x86_64 )
>Reporter: Alex Tian
>Assignee: Neha Narkhede
>Priority: Minor
>  Labels: newbie
> Attachments: Track on testingTopic-9's movement.txt, 
> track_on_testingTopic-9_movement_on_the_following_2_days.txt
>
>   Original Estimate: 504h
>  Remaining Estimate: 504h
>
> When I started a topic reassignment (36 partitions in total) in my Kafka 
> cluster, 24 partitions succeeded and 12 failed. However, the 12 failed 
> partitions have extra replicas. I think the reason is that AR still consists 
> of RAR and OAR even though the reassignment for the partition failed. Could 
> we regard this problem as a bug? My apologies for any mistakes in my 
> question, since I am a beginner with Kafka.
> This is the output from operation: 
> 1. alex-topics-to-move.json:
> {"topics": [{"topic": "testingTopic"}],
>  "version":1
> }
> 2. Generate a reassignment plan
> $./kafka-reassign-partitions.sh  --generate  --broker-list 0,1,2,3,4 
> --topics-to-move-json-file ./alex-topics-to-move.json   --zookeeper 
> 192.168.112.95:2181,192.168.112.96:2181,192.168.112.97:2181,192.168.112.98:2181,192.168.112.99:2181
> Current partition replica assignment
> {"version":1,
>  "partitions":[   {"topic":"testingTopic","partition":27,"replicas":[0,2]},
>
> {"topic":"testingTopic","partition":1,"replicas":[1,2]},
>   {"topic":"testingTopic","partition":12,"replicas":[0,1]},
>   {"topic":"testingTopic","partition":6,"replicas":[0,1]},
>   {"topic":"testingTopic","partition":16,"replicas":[1,0]},
>   {"topic":"testingTopic","partition":32,"replicas":[2,0]},
>   {"topic":"testingTopic","partition":18,"replicas":[0,1]},
>   {"topic":"testingTopic","partition":31,"replicas":[1,2]},
>   {"topic":"testingTopic","partition":9,"replicas":[0,2]},
>   {"topic":"testingTopic","partition":23,"replicas":[2,1]},
>   {"topic":"testingTopic","partition":19,"replicas":[1,2]},
>   {"topic":"testingTopic","partition":34,"replicas":[1,0]},
>   {"topic":"testingTopic","partition":17,"replicas":[2,1]},
>   {"topic":"testingTopic","partition":7,"replicas":[1,2]},
>   {"topic":"testingTopic","partition":20,"replicas":[2,0]},
>   {"topic":"testingTopic","partition":8,"replicas":[2,0]},
>   {"topic":"testingTopic","partition":11,"replicas":[2,1]},
>   {"topic":"testingTopic","partition":3,"replicas":[0,2]},
>   {"topic":"testingTopic","partition":30,"replicas":[0,1]},
>   {"topic":"testingTopic","partition":35,"replicas":[2,1]},
>   {"to

[jira] [Updated] (KAFKA-1835) Kafka new producer needs options to make blocking behavior explicit

2015-01-12 Thread Joe Stein (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Stein updated KAFKA-1835:
-
Fix Version/s: 0.8.2

> Kafka new producer needs options to make blocking behavior explicit
> ---
>
> Key: KAFKA-1835
> URL: https://issues.apache.org/jira/browse/KAFKA-1835
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Affects Versions: 0.8.2, 0.8.3, 0.9.0
>Reporter: Paul Pearcy
> Fix For: 0.8.2
>
> Attachments: KAFKA-1835-New-producer--blocking_v0.patch
>
>   Original Estimate: 504h
>  Remaining Estimate: 504h
>
> The new (0.8.2 standalone) producer will block the first time it attempts to 
> retrieve metadata for a topic. This is not the desired behavior in some use 
> cases where async non-blocking guarantees are required and message loss is 
> acceptable in known cases. Also, most developers will assume an API that 
> returns a future is safe to call in a critical request path. 
> Discussing on the mailing list, the most viable option is to have the 
> following settings:
>  pre.initialize.topics=x,y,z
>  pre.initialize.timeout=x
>  
> This moves potential blocking to the init of the producer and outside of some 
> random request. The potential will still exist for blocking in a corner case 
> where connectivity with Kafka is lost and a topic not included in pre-init 
> has a message sent for the first time. 
> There is the question of what to do when initialization fails. There are a 
> couple of options that I'd like available:
> - Fail creation of the client 
> - Fail all sends until the meta is available 
> Open to input on how the above option should be expressed. 
> It is also worth noting more nuanced solutions exist that could work without 
> the extra settings, they just end up having extra complications and at the 
> end of the day not adding much value. For instance, the producer could accept 
> and queue messages(note: more complicated than I am making it sound due to 
> storing all accepted messages in pre-partitioned compact binary form), but 
> you're still going to be forced to choose to either start blocking or 
> dropping messages at some point. 
> I have some test cases I am going to port over to the Kafka producer 
> integration ones and start from there. My current impl is in scala, but 
> porting to Java shouldn't be a big deal (was using a promise to track init 
> status, but will likely need to make that an atomic bool). 
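For concreteness, a sketch of how the proposed settings might look in a 
producer config; these keys come from the ticket's proposal and were not 
shipped, and the timeout value is illustrative:

    # Proposed (not shipped): resolve metadata for these topics at init time,
    # moving any potential blocking out of the send path
    pre.initialize.topics=x,y,z
    pre.initialize.timeout=5000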



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1855) Topic unusable after unsuccessful UpdateMetadataRequest

2015-01-10 Thread Joe Stein (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Stein updated KAFKA-1855:
-
Fix Version/s: 0.8.2

> Topic unusable after unsuccessful UpdateMetadataRequest
> ---
>
> Key: KAFKA-1855
> URL: https://issues.apache.org/jira/browse/KAFKA-1855
> Project: Kafka
>  Issue Type: Bug
>  Components: controller
>Affects Versions: 0.8.2
>Reporter: Henri Pihkala
> Fix For: 0.8.2
>
>
> Sometimes, seemingly randomly, topic creation/initialization might fail with 
> the following lines in controller.log. Other logs show no errors. When this 
> happens, the topic is unusable (gives UnknownTopicOrPartition for all 
> requests).
> For me this happens 5-10% of the time. Feels like it's more likely to happen 
> if there is time between topic creations. Observed on 0.8.2-beta, have not 
> tried previous versions.
> [2015-01-09 16:15:27,153] WARN [Controller-0-to-broker-0-send-thread], 
> Controller 0 fails to send a request to broker 
> id:0,host:192.168.10.21,port:9092 (kafka.controller.RequestSendThread)
> java.io.EOFException: Received -1 when reading from channel, socket has 
> likely been closed.
>   at kafka.utils.Utils$.read(Utils.scala:381)
>   at 
> kafka.network.BoundedByteBufferReceive.readFrom(BoundedByteBufferReceive.scala:54)
>   at kafka.network.Receive$class.readCompletely(Transmission.scala:56)
>   at 
> kafka.network.BoundedByteBufferReceive.readCompletely(BoundedByteBufferReceive.scala:29)
>   at kafka.network.BlockingChannel.receive(BlockingChannel.scala:108)
>   at 
> kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:146)
>   at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)
> [2015-01-09 16:15:27,156] ERROR [Controller-0-to-broker-0-send-thread], 
> Controller 0 epoch 6 failed to send request 
> Name:UpdateMetadataRequest;Version:0;Controller:0;ControllerEpoch:6;CorrelationId:48;ClientId:id_0-host_192.168.10.21-port_9092;AliveBrokers:id:0,host:192.168.10.21,port:9092;PartitionState:[40963064-cdd2-4cd1-937a-9827d3ab77ad,0]
>  -> 
> (LeaderAndIsrInfo:(Leader:0,ISR:0,LeaderEpoch:0,ControllerEpoch:6),ReplicationFactor:1),AllReplicas:0)
>  to broker id:0,host:192.168.10.21,port:9092. Reconnecting to broker. 
> (kafka.controller.RequestSendThread)
> java.nio.channels.ClosedChannelException
>   at kafka.network.BlockingChannel.send(BlockingChannel.scala:97)
>   at 
> kafka.controller.RequestSendThread.liftedTree1$1(ControllerChannelManager.scala:132)
>   at 
> kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:131)
>   at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1786) implement a global configuration feature for brokers

2015-01-09 Thread Joe Stein (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14271871#comment-14271871
 ] 

Joe Stein commented on KAFKA-1786:
--

Hey [~nehanarkhede], I had sent this out on the mailing list a while back as 
part of the CLI tool changes (parent ticket) 
http://search-hadoop.com/m/4TaT4CJpj11, along with a Confluence page: 
https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Command+Line+and+Related+Improvements

> implement a global configuration feature for brokers
> 
>
> Key: KAFKA-1786
> URL: https://issues.apache.org/jira/browse/KAFKA-1786
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Joe Stein
>Assignee: Andrii Biletskyi
> Fix For: 0.8.3
>
> Attachments: KAFKA-1786.patch
>
>
> Global-level configurations (much like topic-level ones) for brokers are 
> managed by humans and automation systems through server.properties.  
> Some configurations make sense to use as defaults (like it is now) or to 
> override from a central location (ZooKeeper for now). We can modify these 
> through the new CLI tool so that every broker can have the exact same 
> setting.  Some configurations we should allow to be overridden from 
> server.properties (like port) but for others we should use the global store 
> as the source of truth (e.g. auto topic enable, replica fetch message size, 
> etc). Since most configurations, I believe, are going to fall into this 
> category, we should keep the set of server.properties entries that can 
> override the global config in a list in the code which we can manage... for 
> everything else the global takes precedence. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1792) change behavior of --generate to produce assignment config with fair replica distribution and minimal number of reassignments

2015-01-09 Thread Joe Stein (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14271872#comment-14271872
 ] 

Joe Stein commented on KAFKA-1792:
--

[~nehanarkhede] Dmitry has been out on vacation and should be able to pick this 
back up once he returns; sorry for the radio silence, but it was due to that.

> change behavior of --generate to produce assignment config with fair replica 
> distribution and minimal number of reassignments
> -
>
> Key: KAFKA-1792
> URL: https://issues.apache.org/jira/browse/KAFKA-1792
> Project: Kafka
>  Issue Type: Sub-task
>  Components: tools
>Reporter: Dmitry Pekar
>Assignee: Dmitry Pekar
> Fix For: 0.8.3
>
> Attachments: KAFKA-1792.patch, KAFKA-1792_2014-12-03_19:24:56.patch, 
> KAFKA-1792_2014-12-08_13:42:43.patch, KAFKA-1792_2014-12-19_16:48:12.patch, 
> generate_alg_tests.txt
>
>
> The current implementation produces a fair replica distribution across the 
> specified list of brokers. Unfortunately, it doesn't take
> the current replica assignment into account.
> So if we have, for instance, 3 brokers id=[0..2] and are going to add a 
> fourth broker id=3, 
> --generate will create an assignment config which redistributes replicas 
> fairly across brokers [0..3] 
> as if those partitions were created from scratch. It will not take the 
> current replica 
> assignment into consideration and accordingly will not try to minimize the 
> number of replica moves between brokers.
> As proposed by [~charmalloc] this should be improved. The new output of the 
> improved --generate algorithm should satisfy the following requirements:
> - fairness of replica distribution - every broker will have R or R+1 replicas 
> assigned;
> - minimum of reassignments - number of replica moves between brokers will be 
> minimal;
> Example.
> Consider following replica distribution per brokers [0..3] (we just added 
> brokers 2 and 3):
> - broker - 0, 1, 2, 3 
> - replicas - 7, 6, 0, 0
> The new algorithm will produce following assignment:
> - broker - 0, 1, 2, 3 
> - replicas - 4, 3, 3, 3
> - moves - -3, -3, +3, +3
> It will be fair and number of moves will be 6, which is minimal for specified 
> initial distribution.
> The scope of this issue is:
> - design an algorithm matching the above requirements;
> - implement this algorithm and unit tests;
> - test it manually using different initial assignments;
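The worked example above is easy to reproduce; a minimal Scala sketch of the 
fairness arithmetic only (not the actual tool, and ignoring which specific 
partitions move):

    object FairAssignmentSketch {
      // Target replica counts: every broker ends with R or R+1 replicas.
      def targets(current: Map[Int, Int]): Map[Int, Int] = {
        val total   = current.values.sum
        val brokers = current.keys.toSeq.sorted
        val base    = total / brokers.size   // R
        val extra   = total % brokers.size   // how many brokers get R+1
        brokers.zipWithIndex.map { case (b, i) =>
          b -> (if (i < extra) base + 1 else base)
        }.toMap
      }

      def main(args: Array[String]): Unit = {
        val current = Map(0 -> 7, 1 -> 6, 2 -> 0, 3 -> 0)  // example from ticket
        val target  = targets(current)                     // 4, 3, 3, 3
        val moves   = target.map { case (b, t) => b -> (t - current(b)) }
        println(s"target=$target moves=$moves")            // moves: -3, -3, +3, +3
      }
    }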



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1853) Unsuccessful suffix rename attempt of LogSegment can leak files and also leave the LogSegment in an invalid state

2015-01-08 Thread Joe Stein (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Stein updated KAFKA-1853:
-
Fix Version/s: 0.8.3

> Unsuccessful suffix rename attempt of LogSegment can leak files and also leave 
> the LogSegment in an invalid state
> 
>
> Key: KAFKA-1853
> URL: https://issues.apache.org/jira/browse/KAFKA-1853
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.1.1
>Reporter: jaikiran pai
> Fix For: 0.8.3
>
>
> As noted in this discussion in the user mailing list 
> http://mail-archives.apache.org/mod_mbox/kafka-users/201501.mbox/%3C54AE3661.8080007%40gmail.com%3E
>  an unsuccessful attempt at renaming the underlying files of a LogSegment can 
> lead to file leaks and also leave the LogSegment in an invalid state.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1850) Failed reassignment leads to additional replica

2015-01-08 Thread Joe Stein (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14270551#comment-14270551
 ] 

Joe Stein commented on KAFKA-1850:
--

An attachment will make it easier for folks to look at your outputs and/or logs.

What version of ZooKeeper are you using? What distribution? 

Do you know which broker is the controller? 

Check out all of the logs on the broker that is the controller. Look for 
errors. Upload those too.

> Failed reassignment leads to additional replica
> ---
>
> Key: KAFKA-1850
> URL: https://issues.apache.org/jira/browse/KAFKA-1850
> Project: Kafka
>  Issue Type: Bug
>  Components: controller
>Affects Versions: 0.8.1
> Environment: CentOS  (Linux Kernel 2.6.32-71.el6.x86_64 )
>Reporter: Alex Tian
>Assignee: Neha Narkhede
>Priority: Minor
>  Labels: newbie
>   Original Estimate: 504h
>  Remaining Estimate: 504h
>
> When I started a topic reassignment (36 partitions in total) in my Kafka 
> cluster, 24 partitions succeeded and 12 failed. However, the 12 failed 
> partitions have extra replicas. I think the reason is that AR still consists 
> of RAR and OAR even though the reassignment for the partition failed. Could 
> we regard this problem as a bug? My apologies for any mistakes in my 
> question, since I am a beginner with Kafka.
> This is the output from operation: 
> 1. alex-topics-to-move.json:
> {"topics": [{"topic": "testingTopic"}],
>  "version":1
> }
> 2. Generate a reassignment plan
> $./kafka-reassign-partitions.sh  --generate  --broker-list 0,1,2,3,4 
> --topics-to-move-json-file ./alex-topics-to-move.json   --zookeeper 
> 192.168.112.95:2181,192.168.112.96:2181,192.168.112.97:2181,192.168.112.98:2181,192.168.112.99:2181
> Current partition replica assignment
> {"version":1,
>  "partitions":[   {"topic":"testingTopic","partition":27,"replicas":[0,2]},
>
> {"topic":"testingTopic","partition":1,"replicas":[1,2]},
>   {"topic":"testingTopic","partition":12,"replicas":[0,1]},
>   {"topic":"testingTopic","partition":6,"replicas":[0,1]},
>   {"topic":"testingTopic","partition":16,"replicas":[1,0]},
>   {"topic":"testingTopic","partition":32,"replicas":[2,0]},
>   {"topic":"testingTopic","partition":18,"replicas":[0,1]},
>   {"topic":"testingTopic","partition":31,"replicas":[1,2]},
>   {"topic":"testingTopic","partition":9,"replicas":[0,2]},
>   {"topic":"testingTopic","partition":23,"replicas":[2,1]},
>   {"topic":"testingTopic","partition":19,"replicas":[1,2]},
>   {"topic":"testingTopic","partition":34,"replicas":[1,0]},
>   {"topic":"testingTopic","partition":17,"replicas":[2,1]},
>   {"topic":"testingTopic","partition":7,"replicas":[1,2]},
>   {"topic":"testingTopic","partition":20,"replicas":[2,0]},
>   {"topic":"testingTopic","partition":8,"replicas":[2,0]},
>   {"topic":"testingTopic","partition":11,"replicas":[2,1]},
>   {"topic":"testingTopic","partition":3,"replicas":[0,2]},
>   {"topic":"testingTopic","partition":30,"replicas":[0,1]},
>   {"topic":"testingTopic","partition":35,"replicas":[2,1]},
>   {"topic":"testingTopic","partition":26,"replicas":[2,0]},
>   {"topic":"testingTopic","partition":22,"replicas":[1,0]},
>   {"topic":"testingTopic","partition":10,"replicas":[1,0]},
>   {"topic":"testingTopic","partition":24,"replicas":[0,1]},
>   {"topic":"testingTopic","partition":21,"replicas":[0,2]},
>   {"topic":"testingTopic","partition":15,"replicas":[0,2]},
>   {"topic":"testingTopic","partition":4,"replicas":[1,0]},
>   {"topic":"testingTopic","partition":28,"replicas":[1,0]},
>   {"topic":"testingTopic","partition":25,"replicas":[1,2]},:
>   {"topic":"testingTopic","partition":14,"replicas":[2,0]},
>   {"topic":"testingTopic","partition":2,"replicas":[2,0]},
>   {"topic":"testingTopic","partition":13,"replicas":[1,2]},
>   {"topic":"testingTopic","partition":5,"replicas":[2,1]},
>   {"topic":"testingTopic","partition":29,"replicas":[2,1]},
>   {"topic":"testingTopic","partition":33,"replicas":[0,2]},
>   {"topic":"testingTopic","partition":0,"replicas":[0,1]}]}
>  Proposed partition reassignment configuration  ( 
> alex-expand-cluster-reassignment.json )
> {"version":1,
>  "partitions":[   
> {"topic":

[jira] [Commented] (KAFKA-1847) Update Readme to reflect changes in gradle wrapper

2015-01-08 Thread Joe Stein (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14269478#comment-14269478
 ] 

Joe Stein commented on KAFKA-1847:
--

Hi [~thinktainer], this is already updated: 
https://github.com/apache/kafka/blob/0.8.2/README.md#first-bootstrap-and-download-the-wrapper
 What more were you thinking should be done?

> Update Readme to reflect changes in gradle wrapper
> --
>
> Key: KAFKA-1847
> URL: https://issues.apache.org/jira/browse/KAFKA-1847
> Project: Kafka
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 0.8.2, 0.8.3, 0.9.0
>Reporter: Martin Schinz
>Priority: Minor
>  Labels: documentation, easyfix, newbie
>
> KAFKA-1490 removed a dependency on a binary. This 
> [comment|https://issues.apache.org/jira/browse/KAFKA-1490?focusedCommentId=14157865&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14157865]
>  reflects the changes to the build process. The documentation should be 
> updated to make users aware of this requirement.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1722) static analysis code coverage for pci audit needs

2015-01-07 Thread Joe Stein (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Stein updated KAFKA-1722:
-
Reviewer: Joe Stein

> static analysis code coverage for pci audit needs
> -
>
> Key: KAFKA-1722
> URL: https://issues.apache.org/jira/browse/KAFKA-1722
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Reporter: Joe Stein
>Assignee: Ashish Kumar Singh
> Fix For: 0.9.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1756) never allow the replica fetch size to be less than the max message size

2015-01-07 Thread Joe Stein (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14268585#comment-14268585
 ] 

Joe Stein commented on KAFKA-1756:
--

Hi [~sriharsha], we have a patch for global configuration management (where we 
store broker configs in ZK) https://issues.apache.org/jira/browse/KAFKA-1786, 
so once that is done it should make this easier to implement.

> never allow the replica fetch size to be less than the max message size
> ---
>
> Key: KAFKA-1756
> URL: https://issues.apache.org/jira/browse/KAFKA-1756
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.1.1, 0.8.2
>Reporter: Joe Stein
>Priority: Blocker
> Fix For: 0.8.3
>
>
> There exists a very hazardous scenario where if max.message.bytes is 
> greater than replica.fetch.max.bytes the message will never replicate. 
> This will bring the ISR down to 1 (eventually/quickly once 
> replica.lag.max.messages is reached). If during this window the leader itself 
> goes out of the ISR then the new leader will commit the last offset it 
> replicated. This is also bad for sync producers with -1 ack because they will 
> all block (herd effect caused upstream) in this scenario too.
> The fix here is two fold
> 1) when setting max.message.bytes using kafka-topics we must first check each 
> and every broker (which will need some thought about how to do this because 
> of the topic command ZK notification) that max.message.bytes <= 
> replica.fetch.max.bytes and if it is NOT then DO NOT create the topic
> 2) if you change this in server.properties then the broker should not start 
> if max.message.bytes > replica.fetch.max.bytes
> This does beg the question/issue some about centralizing certain/some/all 
> configurations so that inconsistencies do not occur (where broker 1 has 
> max.message.bytes > replica.fetch.max.bytes but broker 2 max.message.bytes <= 
> replica.fetch.max.bytes because of an error in properties). I do not want to 
> conflate this ticket but I think it is worth mentioning/bringing up here as 
> it is a good example of where it could make sense. 
> I set this as BLOCKER for 0.8.2-beta because we did so much work to enable 
> consistency vs availability and 0 data loss; this corner case should be part 
> of 0.8.2-final
> Also, I could go one step further (though I would not consider this part a 
> blocker for 0.8.2, but am interested in what other folks think) with a 
> consumer fetch size check, so that if the max message size is increased 
> messages would no longer be consumed (since the consumer fetch max would be 
> < max.message.bytes).
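Fix (2) amounts to a startup invariant; a minimal sketch of the guard, 
assuming illustrative names rather than Kafka's actual KafkaConfig fields:

    object BrokerStartupChecks {
      // Refuse to start when replication could never fetch the largest message.
      def validateMessageSizes(maxMessageBytes: Int, replicaFetchMaxBytes: Int): Unit =
        require(maxMessageBytes <= replicaFetchMaxBytes,
          s"max.message.bytes ($maxMessageBytes) must not exceed " +
          s"replica.fetch.max.bytes ($replicaFetchMaxBytes); refusing to start")
    }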



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (KAFKA-1722) static analysis code coverage for pci audit needs

2015-01-07 Thread Joe Stein (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Stein reassigned KAFKA-1722:


Assignee: Ashish Kumar Singh

Absolutely! 

> static analysis code coverage for pci audit needs
> -
>
> Key: KAFKA-1722
> URL: https://issues.apache.org/jira/browse/KAFKA-1722
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Reporter: Joe Stein
>Assignee: Ashish Kumar Singh
> Fix For: 0.9.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1836) metadata.fetch.timeout.ms set to zero blocks forever

2015-01-06 Thread Joe Stein (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Stein updated KAFKA-1836:
-
Fix Version/s: 0.8.3

> metadata.fetch.timeout.ms set to zero blocks forever
> 
>
> Key: KAFKA-1836
> URL: https://issues.apache.org/jira/browse/KAFKA-1836
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.8.2
>Reporter: Paul Pearcy
>Priority: Minor
>  Labels: newbie
> Fix For: 0.8.3
>
> Attachments: KAFKA-1836.patch
>
>
> You can easily work around this by setting the timeout value to 1ms, but 0ms 
> should mean 0ms or at least have the behavior documented. 
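The workaround from the description, as it would appear in the new producer's 
configuration:

    # 0 currently means "block forever"; any value >= 1 behaves as a real timeout
    metadata.fetch.timeout.ms=1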



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1512) Limit the maximum number of connections per ip address

2015-01-06 Thread Joe Stein (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Stein updated KAFKA-1512:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed to the 0.8.2 branch and trunk; thanks for the patch, Jeff.

> Limit the maximum number of connections per ip address
> --
>
> Key: KAFKA-1512
> URL: https://issues.apache.org/jira/browse/KAFKA-1512
> Project: Kafka
>  Issue Type: New Feature
>Affects Versions: 0.8.2
>Reporter: Jay Kreps
>Assignee: Jeff Holoman
> Fix For: 0.8.2
>
> Attachments: KAFKA-1512-082.patch, KAFKA-1512.patch, 
> KAFKA-1512.patch, KAFKA-1512_2014-07-03_15:17:55.patch, 
> KAFKA-1512_2014-07-14_13:28:15.patch, KAFKA-1512_2014-12-23_21:47:23.patch
>
>
> To protect against client connection leaks add a new configuration
>   max.connections.per.ip
> that causes the SocketServer to enforce a limit on the maximum number of 
> connections from each InetAddress instance. For backwards compatibility this 
> will default to 2 billion.
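For reference, enabling the new limit would look like this in 
server.properties; the value here is just an example, since the ticket's 
default is effectively unlimited:

    # Reject further connections from a single IP once it has this many open
    max.connections.per.ip=100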



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1489) Global threshold on data retention size

2015-01-06 Thread Joe Stein (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Stein updated KAFKA-1489:
-
Fix Version/s: (was: 0.8.2)
   0.8.3
 Assignee: (was: Jay Kreps)

> Global threshold on data retention size
> ---
>
> Key: KAFKA-1489
> URL: https://issues.apache.org/jira/browse/KAFKA-1489
> Project: Kafka
>  Issue Type: New Feature
>  Components: log
>Affects Versions: 0.8.1.1
>Reporter: Andras Sereny
>  Labels: newbie
> Fix For: 0.8.3
>
>
> Currently, Kafka has per topic settings to control the size of one single log 
> (log.retention.bytes). With lots of topics of different volume and as they 
> grow in number, it could become tedious to maintain topic level settings 
> applying to a single log. 
> Often, a chunk of disk space is dedicated to Kafka that hosts all logs 
> stored, so it'd make sense to have a configurable threshold to control how 
> much space *all* data in one Kafka log data directory can take up.
> See also:
> http://mail-archives.apache.org/mod_mbox/kafka-users/201406.mbox/browser
> http://mail-archives.apache.org/mod_mbox/kafka-users/201311.mbox/%3c20131107015125.gc9...@jkoshy-ld.linkedin.biz%3E



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1489) Global threshold on data retention size

2015-01-06 Thread Joe Stein (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Stein updated KAFKA-1489:
-
Fix Version/s: 0.8.2

> Global threshold on data retention size
> ---
>
> Key: KAFKA-1489
> URL: https://issues.apache.org/jira/browse/KAFKA-1489
> Project: Kafka
>  Issue Type: New Feature
>  Components: log
>Affects Versions: 0.8.1.1
>Reporter: Andras Sereny
>Assignee: Jay Kreps
>  Labels: newbie
> Fix For: 0.8.2
>
>
> Currently, Kafka has per topic settings to control the size of one single log 
> (log.retention.bytes). With lots of topics of different volume and as they 
> grow in number, it could become tedious to maintain topic level settings 
> applying to a single log. 
> Often, a chunk of disk space is dedicated to Kafka that hosts all logs 
> stored, so it'd make sense to have a configurable threshold to control how 
> much space *all* data in one Kafka log data directory can take up.
> See also:
> http://mail-archives.apache.org/mod_mbox/kafka-users/201406.mbox/browser
> http://mail-archives.apache.org/mod_mbox/kafka-users/201311.mbox/%3c20131107015125.gc9...@jkoshy-ld.linkedin.biz%3E



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1841) OffsetCommitRequest API - timestamp field is not versioned

2015-01-05 Thread Joe Stein (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14265798#comment-14265798
 ] 

Joe Stein commented on KAFKA-1841:
--

In addition to the issue you bring up, the functionality as a whole has 
changed: when you call OffsetFetchRequest, version = 0 needs to preserve the 
old functionality 
https://github.com/apache/kafka/blob/0.8.1/core/src/main/scala/kafka/server/KafkaApis.scala#L678-L700
 and version = 1 the new 
https://github.com/apache/kafka/blob/0.8.2/core/src/main/scala/kafka/server/KafkaApis.scala#L153-L223.
 Also, even though the OffsetFetchRequest wire protocol is the same, after the 
0.8.2 upgrade a client that was using the 0.8.1.1 OffsetFetchRequest 
https://github.com/apache/kafka/blob/0.8.1/core/src/main/scala/kafka/server/KafkaApis.scala#L705-L728
 will stop going to ZooKeeper and start going to Kafka storage 
https://github.com/apache/kafka/blob/0.8.2/core/src/main/scala/kafka/server/KafkaApis.scala#L504-L519,
 so more errors will happen and things will break too.  

> OffsetCommitRequest API - timestamp field is not versioned
> --
>
> Key: KAFKA-1841
> URL: https://issues.apache.org/jira/browse/KAFKA-1841
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.2
> Environment: wire-protocol
>Reporter: Dana Powers
>Priority: Blocker
> Fix For: 0.8.2
>
>
> Timestamp field was added to the OffsetCommitRequest wire protocol api for 
> 0.8.2 by KAFKA-1012 .  The 0.8.1.1 server does not support the timestamp 
> field, so I think the api version of OffsetCommitRequest should be 
> incremented and checked by the 0.8.2 kafka server before attempting to read a 
> timestamp from the network buffer in OffsetCommitRequest.readFrom 
> (core/src/main/scala/kafka/api/OffsetCommitRequest.scala)
> It looks like a subsequent patch (KAFKA-1462) added another api change to 
> support a new constructor w/ params generationId and consumerId, calling that 
> version 1, and a pending patch (KAFKA-1634) adds retentionMs as another 
> field, while possibly removing timestamp altogether, calling this version 2.  
> So the fix here is not straightforward enough for me to submit a patch.
> This could possibly be merged into KAFKA-1634, but opening as a separate 
> Issue because I believe the lack of versioning in the current trunk should 
> block 0.8.2 release.
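A hedged sketch of version-aware deserialization, using illustrative stand-in
types (the real OffsetCommitRequest.readFrom and its field layout differ):
only read the timestamp for wire-protocol versions that actually carry it.

{code}
import java.nio.ByteBuffer

// Stand-in for one partition's commit data (illustrative only).
case class CommitEntry(offset: Long, timestamp: Option[Long])

def readEntry(buffer: ByteBuffer, versionId: Int): CommitEntry = {
  val offset = buffer.getLong
  val timestamp =
    if (versionId >= 1) Some(buffer.getLong) // field present in later versions
    else None                                // a v0 request carries no timestamp
  CommitEntry(offset, timestamp)
}
{code}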



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1841) OffsetCommitRequest API - timestamp field is not versioned

2015-01-05 Thread Joe Stein (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Stein updated KAFKA-1841:
-
Fix Version/s: 0.8.2

> OffsetCommitRequest API - timestamp field is not versioned
> --
>
> Key: KAFKA-1841
> URL: https://issues.apache.org/jira/browse/KAFKA-1841
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.2
> Environment: wire-protocol
>Reporter: Dana Powers
>Priority: Blocker
> Fix For: 0.8.2
>
>
> Timestamp field was added to the OffsetCommitRequest wire protocol api for 
> 0.8.2 by KAFKA-1012 .  The 0.8.1.1 server does not support the timestamp 
> field, so I think the api version of OffsetCommitRequest should be 
> incremented and checked by the 0.8.2 kafka server before attempting to read a 
> timestamp from the network buffer in OffsetCommitRequest.readFrom 
> (core/src/main/scala/kafka/api/OffsetCommitRequest.scala)
> It looks like a subsequent patch (KAFKA-1462) added another api change to 
> support a new constructor w/ params generationId and consumerId, calling that 
> version 1, and a pending patch (KAFKA-1634) adds retentionMs as another 
> field, while possibly removing timestamp altogether, calling this version 2.  
> So the fix here is not straightforward enough for me to submit a patch.
> This could possibly be merged into KAFKA-1634, but opening as a separate 
> Issue because I believe the lack of versioning in the current trunk should 
> block 0.8.2 release.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1012) Implement an Offset Manager and hook offset requests to it

2015-01-05 Thread Joe Stein (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Stein updated KAFKA-1012:
-
Fix Version/s: 0.8.2

> Implement an Offset Manager and hook offset requests to it
> --
>
> Key: KAFKA-1012
> URL: https://issues.apache.org/jira/browse/KAFKA-1012
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Reporter: Tejas Patil
>Assignee: Tejas Patil
>Priority: Minor
> Fix For: 0.8.2
>
> Attachments: KAFKA-1012-v2.patch, KAFKA-1012.patch
>
>
> After KAFKA-657, we have a protocol for consumers to commit and fetch offsets 
> from brokers. Currently, consumers are not using this API and directly 
> talking with Zookeeper. 
> This Jira will involve following:
> 1. Add a special topic in kafka for storing offsets
> 2. Add an OffsetManager interface which would handle storing, accessing, 
> loading and maintaining consumer offsets
> 3. Implement offset managers for both of these 2 choices : existing ZK based 
> storage or inbuilt storage for offsets.
> 4. Leader brokers would now maintain an additional hash table of offsets for 
> the group-topic-partitions that they lead
> 5. Consumers should now use the OffsetCommit and OffsetFetch API
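A rough sketch of what the OffsetManager interface in item 2 could look like,
with an in-memory stand-in for a storage backend; names and signatures are
illustrative, not the committed API:

{code}
trait OffsetManager {
  def commitOffset(group: String, topic: String, partition: Int, offset: Long): Unit
  def fetchOffset(group: String, topic: String, partition: Int): Option[Long]
}

// Toy backend; per item 3, real implementations would back this with
// ZooKeeper or with the special offsets topic.
class InMemoryOffsetManager extends OffsetManager {
  private val offsets = scala.collection.mutable.Map.empty[(String, String, Int), Long]
  def commitOffset(group: String, topic: String, partition: Int, offset: Long): Unit =
    offsets((group, topic, partition)) = offset
  def fetchOffset(group: String, topic: String, partition: Int): Option[Long] =
    offsets.get((group, topic, partition))
}
{code}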



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1512) Limit the maximum number of connections per ip address

2015-01-05 Thread Joe Stein (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14264667#comment-14264667
 ] 

Joe Stein commented on KAFKA-1512:
--

I was just about to commit this and realized this is introduced in 0.8.2 but 
not fully complete, so we should have this as a patch for 0.8.2 and trunk. 
[~jholoman] can you upload a 0.8.2 patch also so we can double commit this to 
0.8.2 and trunk please? If there are no objections to having this in 0.8.2, it 
seems reasonable: since it was introduced in this release, we shouldn't ship 
something incomplete if we have a fix available now.

> Limit the maximum number of connections per ip address
> --
>
> Key: KAFKA-1512
> URL: https://issues.apache.org/jira/browse/KAFKA-1512
> Project: Kafka
>  Issue Type: New Feature
>Affects Versions: 0.8.2
>Reporter: Jay Kreps
>Assignee: Jeff Holoman
> Fix For: 0.8.2
>
> Attachments: KAFKA-1512.patch, KAFKA-1512.patch, 
> KAFKA-1512_2014-07-03_15:17:55.patch, KAFKA-1512_2014-07-14_13:28:15.patch, 
> KAFKA-1512_2014-12-23_21:47:23.patch
>
>
> To protect against client connection leaks add a new configuration
>   max.connections.per.ip
> that causes the SocketServer to enforce a limit on the maximum number of 
> connections from each InetAddress instance. For backwards compatibility this 
> will default to 2 billion.
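A minimal sketch of the enforcement idea, assuming a quota object the socket
server consults on accept and disconnect (illustrative, not the actual
SocketServer code):

{code}
import java.net.InetAddress
import scala.collection.mutable

class ConnectionQuota(maxPerIp: Int) {
  private val counts = mutable.Map.empty[InetAddress, Int].withDefaultValue(0)

  // Called on accept: reject once the per-IP limit is reached.
  def tryAccept(addr: InetAddress): Boolean = synchronized {
    if (counts(addr) >= maxPerIp) false
    else { counts(addr) += 1; true }
  }

  // Called on disconnect so the address can connect again later.
  def release(addr: InetAddress): Unit = synchronized {
    counts(addr) = math.max(0, counts(addr) - 1)
  }
}
{code}

With the proposed default of 2 billion (roughly Int.MaxValue), tryAccept would 
effectively never reject, which preserves backwards compatibility.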



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1512) Limit the maximum number of connections per ip address

2015-01-05 Thread Joe Stein (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Stein updated KAFKA-1512:
-
Affects Version/s: 0.8.2

> Limit the maximum number of connections per ip address
> --
>
> Key: KAFKA-1512
> URL: https://issues.apache.org/jira/browse/KAFKA-1512
> Project: Kafka
>  Issue Type: New Feature
>Affects Versions: 0.8.2
>Reporter: Jay Kreps
>Assignee: Jeff Holoman
> Fix For: 0.8.2
>
> Attachments: KAFKA-1512.patch, KAFKA-1512.patch, 
> KAFKA-1512_2014-07-03_15:17:55.patch, KAFKA-1512_2014-07-14_13:28:15.patch, 
> KAFKA-1512_2014-12-23_21:47:23.patch
>
>
> To protect against client connection leaks add a new configuration
>   max.connections.per.ip
> that causes the SocketServer to enforce a limit on the maximum number of 
> connections from each InetAddress instance. For backwards compatibility this 
> will default to 2 billion.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1753) add --decommission-broker option

2014-12-31 Thread Joe Stein (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1753?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Stein updated KAFKA-1753:
-
Status: Patch Available  (was: Open)

This is in KAFKA-1792

> add --decommission-broker option
> 
>
> Key: KAFKA-1753
> URL: https://issues.apache.org/jira/browse/KAFKA-1753
> Project: Kafka
>  Issue Type: Sub-task
>  Components: tools
>Reporter: Dmitry Pekar
>Assignee: Dmitry Pekar
> Fix For: 0.8.3
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (KAFKA-1776) Re-factor out existing tools that have been implemented behind the CLI

2014-12-31 Thread Joe Stein (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Stein reassigned KAFKA-1776:


Assignee: (was: Andrii Biletskyi)

> Re-factor out existing tools that have been implemented behind the CLI
> --
>
> Key: KAFKA-1776
> URL: https://issues.apache.org/jira/browse/KAFKA-1776
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Joe Stein
>Priority: Minor
> Fix For: 0.9.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (KAFKA-1777) Re-factor reasign-partitions into CLI

2014-12-31 Thread Joe Stein (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Stein reassigned KAFKA-1777:


Assignee: Andrii Biletskyi

> Re-factor reasign-partitions into CLI
> -
>
> Key: KAFKA-1777
> URL: https://issues.apache.org/jira/browse/KAFKA-1777
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Joe Stein
>Assignee: Andrii Biletskyi
> Fix For: 0.8.3
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (KAFKA-1776) Re-factor out existing tools that have been implemented behind the CLI

2014-12-31 Thread Joe Stein (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Stein reassigned KAFKA-1776:


Assignee: Andrii Biletskyi

> Re-factor out existing tools that have been implemented behind the CLI
> --
>
> Key: KAFKA-1776
> URL: https://issues.apache.org/jira/browse/KAFKA-1776
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Joe Stein
>Assignee: Andrii Biletskyi
>Priority: Minor
> Fix For: 0.9.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1775) Re-factor TopicCommand into thew handerAdminMessage call

2014-12-31 Thread Joe Stein (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Stein updated KAFKA-1775:
-
Status: Patch Available  (was: Reopened)

> Re-factor TopicCommand into thew handerAdminMessage call 
> -
>
> Key: KAFKA-1775
> URL: https://issues.apache.org/jira/browse/KAFKA-1775
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Joe Stein
>Assignee: Andrii Biletskyi
> Fix For: 0.8.3
>
>
> kafka-topic.sh should become
> kafka --topic --everything else the same from the CLI perspective, so we need 
> to have the calls from the byte layer get fed into that same code (with as few 
> changes as possible), called from the handleAdmin call after deducing which 
> "Utility"[1] it is operating for.
> I think we should not remove the existing kafka-topic.sh and should preserve 
> the existing functionality (with as little code duplication as possible) until 
> 0.9 (and there we can remove it after folks have used it for a release or two 
> and given feedback)[2]
> [1] https://issues.apache.org/jira/browse/KAFKA-1772
> [2] https://issues.apache.org/jira/browse/KAFKA-1776



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1774) REPL and Shell Client for Admin Message RQ/RP

2014-12-31 Thread Joe Stein (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Stein updated KAFKA-1774:
-
Status: Patch Available  (was: Reopened)

> REPL and Shell Client for Admin Message RQ/RP
> -
>
> Key: KAFKA-1774
> URL: https://issues.apache.org/jira/browse/KAFKA-1774
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Joe Stein
>Assignee: Andrii Biletskyi
> Fix For: 0.8.3
>
>
> We should have a REPL we can work in and execute the commands with the 
> arguments. With this we can do:
> ./kafka.sh --shell 
> kafka>attach cluster -b localhost:9092;
> kafka>describe topic sampleTopicNameForExample;
> The command line version can work like it does now so folks don't have to 
> re-write all of their tooling:
> kafka.sh --topics --everything the same as kafka-topics.sh is
> kafka.sh --reassign --everything the same as kafka-reassign-partitions.sh is



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1825) leadership election state is stale and never recovers without all brokers restarting

2014-12-29 Thread Joe Stein (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Stein updated KAFKA-1825:
-
Fix Version/s: (was: 0.8.2)
   0.8.3

> leadership election state is stale and never recovers without all brokers 
> restarting
> 
>
> Key: KAFKA-1825
> URL: https://issues.apache.org/jira/browse/KAFKA-1825
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.1.1, 0.8.2
>Reporter: Joe Stein
>Priority: Critical
> Fix For: 0.8.3
>
> Attachments: KAFKA-1825.executable.tgz
>
>
> I am not sure what the cause is here, but I can succinctly and repeatedly 
> reproduce this issue. I tried with 0.8.1.1 and 0.8.2-beta and both behave in 
> the same manner.
> The code to reproduce this is here 
> https://github.com/stealthly/go_kafka_client/tree/wipAsyncSaramaProducer/producers
> scenario 3 brokers, 1 zookeeper, 1 client (each AWS c3.2xlarge instances)
> create topic 
> producer client sends in 380,000 messages/sec (attached executable)
> everything is fine until you kill -SIGTERM broker #2 
> then at that point the state goes bad for that topic. Even trying to use the 
> console producer (with the sarama producer off) doesn't work.
> Doing a describe, the yoyoma topic looks fine; ran preferred leadership 
> election, lots of issues... still can't produce... only resolution is bouncing 
> all brokers :(
> root@ip-10-233-52-139:/opt/kafka_2.10-0.8.1.1# bin/kafka-topics.sh 
> --zookeeper 10.218.189.234:2181 --describe
> Topic:yoyoma  PartitionCount:36   ReplicationFactor:3 Configs:
>   Topic: yoyoma   Partition: 0Leader: 1   Replicas: 1,2,3 Isr: 1,3
>   Topic: yoyoma   Partition: 1Leader: 1   Replicas: 2,3,1 Isr: 1,3
>   Topic: yoyoma   Partition: 2Leader: 1   Replicas: 3,1,2 Isr: 1,3
>   Topic: yoyoma   Partition: 3Leader: 1   Replicas: 1,3,2 Isr: 1,3
>   Topic: yoyoma   Partition: 4Leader: 1   Replicas: 2,1,3 Isr: 1,3
>   Topic: yoyoma   Partition: 5Leader: 1   Replicas: 3,2,1 Isr: 1,3
>   Topic: yoyoma   Partition: 6Leader: 1   Replicas: 1,2,3 Isr: 1,3
>   Topic: yoyoma   Partition: 7Leader: 1   Replicas: 2,3,1 Isr: 1,3
>   Topic: yoyoma   Partition: 8Leader: 1   Replicas: 3,1,2 Isr: 1,3
>   Topic: yoyoma   Partition: 9Leader: 1   Replicas: 1,3,2 Isr: 1,3
>   Topic: yoyoma   Partition: 10   Leader: 1   Replicas: 2,1,3 Isr: 1,3
>   Topic: yoyoma   Partition: 11   Leader: 1   Replicas: 3,2,1 Isr: 1,3
>   Topic: yoyoma   Partition: 12   Leader: 1   Replicas: 1,2,3 Isr: 1,3
>   Topic: yoyoma   Partition: 13   Leader: 1   Replicas: 2,3,1 Isr: 1,3
>   Topic: yoyoma   Partition: 14   Leader: 1   Replicas: 3,1,2 Isr: 1,3
>   Topic: yoyoma   Partition: 15   Leader: 1   Replicas: 1,3,2 Isr: 1,3
>   Topic: yoyoma   Partition: 16   Leader: 1   Replicas: 2,1,3 Isr: 1,3
>   Topic: yoyoma   Partition: 17   Leader: 1   Replicas: 3,2,1 Isr: 1,3
>   Topic: yoyoma   Partition: 18   Leader: 1   Replicas: 1,2,3 Isr: 1,3
>   Topic: yoyoma   Partition: 19   Leader: 1   Replicas: 2,3,1 Isr: 1,3
>   Topic: yoyoma   Partition: 20   Leader: 1   Replicas: 3,1,2 Isr: 1,3
>   Topic: yoyoma   Partition: 21   Leader: 1   Replicas: 1,3,2 Isr: 1,3
>   Topic: yoyoma   Partition: 22   Leader: 1   Replicas: 2,1,3 Isr: 1,3
>   Topic: yoyoma   Partition: 23   Leader: 1   Replicas: 3,2,1 Isr: 1,3
>   Topic: yoyoma   Partition: 24   Leader: 1   Replicas: 1,2,3 Isr: 1,3
>   Topic: yoyoma   Partition: 25   Leader: 1   Replicas: 2,3,1 Isr: 1,3
>   Topic: yoyoma   Partition: 26   Leader: 1   Replicas: 3,1,2 Isr: 1,3
>   Topic: yoyoma   Partition: 27   Leader: 1   Replicas: 1,3,2 Isr: 1,3
>   Topic: yoyoma   Partition: 28   Leader: 1   Replicas: 2,1,3 Isr: 1,3
>   Topic: yoyoma   Partition: 29   Leader: 1   Replicas: 3,2,1 Isr: 1,3
>   Topic: yoyoma   Partition: 30   Leader: 1   Replicas: 1,2,3 Isr: 1,3
>   Topic: yoyoma   Partition: 31   Leader: 1   Replicas: 2,3,1 Isr: 1,3
>   Topic: yoyoma   Partition: 32   Leader: 1   Replicas: 3,1,2 Isr: 1,3
>   Topic: yoyoma   Partition: 33   Leader: 1   Replicas: 1,3,2 Isr: 1,3
>   Topic: yoyoma   Partition: 34   Leader: 1   Replicas: 2,1,3 Isr: 1,3
>   Topic: yoyoma   Partition: 35   Leader: 1   Replicas: 3,2,1 Isr: 1,3
> root@ip-10-233-52-139:/opt/kafka_2.10-0.8.1.1# 
> bin/kafka-preferred-replica-election.sh --zookeeper 10.218.189.234:2181
> Successfully started preferred replica election for partitions 
> Set([yoyoma,29], [yoyoma,14], [yoyoma,22], [yoyoma,15], [yoyoma,3], 
> [yoyoma,11], [yoyoma,

[jira] [Updated] (KAFKA-1650) Mirror Maker could lose data on unclean shutdown.

2014-12-29 Thread Joe Stein (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Stein updated KAFKA-1650:
-
Fix Version/s: 0.8.3

> Mirror Maker could lose data on unclean shutdown.
> -
>
> Key: KAFKA-1650
> URL: https://issues.apache.org/jira/browse/KAFKA-1650
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Fix For: 0.8.3
>
> Attachments: KAFKA-1650.patch, KAFKA-1650_2014-10-06_10:17:46.patch, 
> KAFKA-1650_2014-11-12_09:51:30.patch, KAFKA-1650_2014-11-17_18:44:37.patch, 
> KAFKA-1650_2014-11-20_12:00:16.patch, KAFKA-1650_2014-11-24_08:15:17.patch, 
> KAFKA-1650_2014-12-03_15:02:31.patch, KAFKA-1650_2014-12-03_19:02:13.patch, 
> KAFKA-1650_2014-12-04_11:59:07.patch, KAFKA-1650_2014-12-06_18:58:57.patch, 
> KAFKA-1650_2014-12-08_01:36:01.patch, KAFKA-1650_2014-12-16_08:03:45.patch, 
> KAFKA-1650_2014-12-17_12:29:23.patch, KAFKA-1650_2014-12-18_18:48:18.patch, 
> KAFKA-1650_2014-12-18_22:17:08.patch, KAFKA-1650_2014-12-18_22:53:26.patch, 
> KAFKA-1650_2014-12-18_23:41:16.patch, KAFKA-1650_2014-12-22_19:07:24.patch, 
> KAFKA-1650_2014-12-23_07:04:28.patch, KAFKA-1650_2014-12-23_16:44:06.patch
>
>
> Currently if mirror maker got shutdown uncleanly, the data in the data 
> channel and buffer could potentially be lost. With the new producer's 
> callback, this issue could be solved.
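A hedged sketch of the callback approach using the new producer's public
Callback interface; the markSafe helper and the offset bookkeeping are
illustrative, not the actual mirror maker patch:

{code}
import org.apache.kafka.clients.producer.{Callback, RecordMetadata}

// Only mark a source offset safe to commit once the producer acks the send;
// on error the offset stays unmarked, so an unclean shutdown cannot skip it.
class CommitOnAck(markSafe: Long => Unit, sourceOffset: Long) extends Callback {
  override def onCompletion(metadata: RecordMetadata, exception: Exception): Unit =
    if (exception == null) markSafe(sourceOffset)
}
{code}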



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1624) bump up default scala version to 2.10.4 to compile with java 8

2014-12-29 Thread Joe Stein (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14260579#comment-14260579
 ] 

Joe Stein commented on KAFKA-1624:
--

Committed to the 0.8.2 branch. Just noticed the commit message says 2.11.4 but 
it is 2.10.4 per the JIRA title (I just changed the title).

> bump up default scala version to 2.10.4 to compile with java 8
> --
>
> Key: KAFKA-1624
> URL: https://issues.apache.org/jira/browse/KAFKA-1624
> Project: Kafka
>  Issue Type: Bug
>Reporter: Joe Stein
>Assignee: Guozhang Wang
>  Labels: newbie
> Fix For: 0.8.2
>
> Attachments: KAFKA-1624.patch, KAFKA-1624_2014-11-24_11:01:56.patch
>
>
> {code}
> Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512m; 
> support was removed in 8.0
> error: error while loading CharSequence, class file 
> '/usr/lib/jvm/java-8-oracle/jre/lib/rt.jar(java/lang/CharSequence.class)' is 
> broken
> (class java.lang.RuntimeException/bad constant pool tag 18 at byte 10)
> error: error while loading Comparator, class file 
> '/usr/lib/jvm/java-8-oracle/jre/lib/rt.jar(java/util/Comparator.class)' is 
> broken
> (class java.lang.RuntimeException/bad constant pool tag 18 at byte 20)
> error: error while loading AnnotatedElement, class file 
> '/usr/lib/jvm/java-8-oracle/jre/lib/rt.jar(java/lang/reflect/AnnotatedElement.class)'
>  is broken
> (class java.lang.RuntimeException/bad constant pool tag 18 at byte 76)
> error: error while loading Arrays, class file 
> '/usr/lib/jvm/java-8-oracle/jre/lib/rt.jar(java/util/Arrays.class)' is broken
> (class java.lang.RuntimeException/bad constant pool tag 18 at byte 765)
> /tmp/sbt_53783b12/xsbt/ExtractAPI.scala:395: error: java.util.Comparator does 
> not take type parameters
>   private[this] val sortClasses = new Comparator[Symbol] {
> ^
> 5 errors found
> :core:compileScala FAILED
> FAILURE: Build failed with an exception.
> * What went wrong:
> Execution failed for task ':core:compileScala'.
> > org.gradle.messaging.remote.internal.PlaceholderException (no error message)
> * Try:
> Run with --stacktrace option to get the stack trace. Run with --info or 
> --debug option to get more log output.
> BUILD FAILED
> Total time: 1 mins 48.298 secs
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1624) bump up default scala version to 2.10.4 to compile with java 8

2014-12-29 Thread Joe Stein (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Stein updated KAFKA-1624:
-
Summary: bump up default scala version to 2.10.4 to compile with java 8  
(was: building on JDK 8 fails)

> bump up default scala version to 2.10.4 to compile with java 8
> --
>
> Key: KAFKA-1624
> URL: https://issues.apache.org/jira/browse/KAFKA-1624
> Project: Kafka
>  Issue Type: Bug
>Reporter: Joe Stein
>Assignee: Guozhang Wang
>  Labels: newbie
> Fix For: 0.8.2
>
> Attachments: KAFKA-1624.patch, KAFKA-1624_2014-11-24_11:01:56.patch
>
>
> {code}
> Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512m; 
> support was removed in 8.0
> error: error while loading CharSequence, class file 
> '/usr/lib/jvm/java-8-oracle/jre/lib/rt.jar(java/lang/CharSequence.class)' is 
> broken
> (class java.lang.RuntimeException/bad constant pool tag 18 at byte 10)
> error: error while loading Comparator, class file 
> '/usr/lib/jvm/java-8-oracle/jre/lib/rt.jar(java/util/Comparator.class)' is 
> broken
> (class java.lang.RuntimeException/bad constant pool tag 18 at byte 20)
> error: error while loading AnnotatedElement, class file 
> '/usr/lib/jvm/java-8-oracle/jre/lib/rt.jar(java/lang/reflect/AnnotatedElement.class)'
>  is broken
> (class java.lang.RuntimeException/bad constant pool tag 18 at byte 76)
> error: error while loading Arrays, class file 
> '/usr/lib/jvm/java-8-oracle/jre/lib/rt.jar(java/util/Arrays.class)' is broken
> (class java.lang.RuntimeException/bad constant pool tag 18 at byte 765)
> /tmp/sbt_53783b12/xsbt/ExtractAPI.scala:395: error: java.util.Comparator does 
> not take type parameters
>   private[this] val sortClasses = new Comparator[Symbol] {
> ^
> 5 errors found
> :core:compileScala FAILED
> FAILURE: Build failed with an exception.
> * What went wrong:
> Execution failed for task ':core:compileScala'.
> > org.gradle.messaging.remote.internal.PlaceholderException (no error message)
> * Try:
> Run with --stacktrace option to get the stack trace. Run with --info or 
> --debug option to get more log output.
> BUILD FAILED
> Total time: 1 mins 48.298 secs
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-1781) Readme should specify that Gradle 2.0 is required for initial bootstrap

2014-12-29 Thread Joe Stein (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Stein resolved KAFKA-1781.
--
Resolution: Duplicate

[~nehanarkhede] done

> Readme should specify that Gradle 2.0 is required for initial bootstrap
> ---
>
> Key: KAFKA-1781
> URL: https://issues.apache.org/jira/browse/KAFKA-1781
> Project: Kafka
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.8.2
>Reporter: Jean-Francois Im
>Priority: Blocker
> Fix For: 0.8.2
>
> Attachments: gradle-2.0-readme.patch
>
>
> Current README.md says "You need to have gradle installed."
> As the bootstrap procedure doesn't work with gradle 1.12, this needs to say 
> that 2.0 or greater is needed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1822) Add "echo" request

2014-12-22 Thread Joe Stein (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14256574#comment-14256574
 ] 

Joe Stein commented on KAFKA-1822:
--

Can we utilize the ClusterMetadataRequest and ClusterMetadataResponse in the 
new command line patch https://issues.apache.org/jira/browse/KAFKA-1694 for 
this? The use there is to contact any broker to find the controller for admin 
use. You could also use it as a ping to make sure you get a response from 
every broker, like you are saying. This call is thin, since everything for the 
new admin interface will then go through the adminHandler on the broker that 
is the controller after this call.

> Add "echo" request
> --
>
> Key: KAFKA-1822
> URL: https://issues.apache.org/jira/browse/KAFKA-1822
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Reporter: Gwen Shapira
>Assignee: Gwen Shapira
> Fix For: 0.8.3
>
>
> Currently there is no simple way to generate a request and validate we 
> receive a response without adding a lot of dependencies for the test.
> Kafka request classes have quite a few dependencies, so they are not really 
> usable when testing infrastructure components or clients.
> Generating a byte-array with meaningless request key id as it is done in 
> SocketServerTest results in unknown request exception that must be handled. 
> I suggest adding an EchoRequest, EchoResponse and EchoHandler. The Request 
> will be the usual header and a bytearray. The Response will be a response 
> header and the same bytearray echoed back.
> Should be useful for client developers and when testing infrastructure 
> changes.
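A self-contained sketch of the proposed round trip; EchoRequest, EchoResponse
and the handler are the suggested additions, not existing Kafka types:

{code}
case class EchoRequest(correlationId: Int, payload: Array[Byte])
case class EchoResponse(correlationId: Int, payload: Array[Byte])

// The handler does nothing but echo the bytes back unchanged, so a client
// can validate the full request/response path with no other dependencies.
def handleEcho(req: EchoRequest): EchoResponse =
  EchoResponse(req.correlationId, req.payload)
{code}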



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1824) in ConsoleProducer - properties key.separator and parse.key no longer work

2014-12-22 Thread Joe Stein (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14256569#comment-14256569
 ] 

Joe Stein commented on KAFKA-1824:
--

Shouldn't this go into the 0.8.2 branch also, since it is a fix for a regression bug?

> in ConsoleProducer - properties key.separator and parse.key no longer work
> --
>
> Key: KAFKA-1824
> URL: https://issues.apache.org/jira/browse/KAFKA-1824
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Assignee: Gwen Shapira
> Fix For: 0.8.3
>
> Attachments: KAFKA-1824.patch, KAFKA-1824.patch, 
> KAFKA-1824_2014-12-22_16:17:42.patch
>
>
> Looks like the change in kafka-1711 breaks them accidentally.
> reader.init is called with readerProps which is initialized with commandline 
> properties as defaults.
> the problem is that reader.init checks:
> if(props.containsKey("parse.key"))
> and defaults don't return true in this case.
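A self-contained demonstration of the java.util.Properties behavior behind
this bug: containsKey ignores the defaults table, while getProperty consults
it.

{code}
import java.util.Properties

val defaults = new Properties()
defaults.setProperty("parse.key", "true")

val props = new Properties(defaults) // commandline properties as defaults

println(props.containsKey("parse.key"))  // false -- defaults are not "contained"
println(props.getProperty("parse.key")) // "true" -- defaults are visible here
{code}

So a check written as props.containsKey("parse.key") never sees values 
supplied only through the defaults, which is exactly the regression described.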



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1806) broker can still expose uncommitted data to a consumer

2014-12-19 Thread Joe Stein (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14253786#comment-14253786
 ] 

Joe Stein commented on KAFKA-1806:
--

I am not sure if this is directly related, but possibly so, and I wanted to 
bring it up. I just created 
https://issues.apache.org/jira/browse/KAFKA-1825 which is a case where the 
Sarama client is putting Kafka in a bad state. I suspect this might be the 
same type of scenario too.

[~lokeshbirla] is there some chance of getting code to reproduce your issue 
succinctly? (Please see my KAFKA-1825 sample code to reproduce, and even a 
binary for folks to try out.)

<< sometimes this issue goes away however I see other problem of leadership 
changes very often even when all brokers are running.

This is another issue I see in production with the Sarama client. I am 
working on hunting down the root cause, but right now the thinking is that it 
is related to https://issues.apache.org/jira/browse/KAFKA-766 and 
https://github.com/Shopify/sarama/issues/236, with 
https://github.com/Shopify/sarama/commit/03ad601663634fd75eb357fee6782653f5a9a5ed
 being a fix for it.

> broker can still expose uncommitted data to a consumer
> --
>
> Key: KAFKA-1806
> URL: https://issues.apache.org/jira/browse/KAFKA-1806
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.8.1.1
>Reporter: lokesh Birla
>Assignee: Neha Narkhede
>
> Although following issue: https://issues.apache.org/jira/browse/KAFKA-727
> is marked fixed but I still see this issue in 0.8.1.1. I am able to 
> reproducer the issue consistently. 
> [2014-08-18 06:43:58,356] ERROR [KafkaApi-1] Error when processing fetch 
> request for partition [mmetopic4,2] offset 1940029 from consumer with 
> correlation id 21 (kafka.server.Kaf
> kaApis)
> java.lang.IllegalArgumentException: Attempt to read with a maximum offset 
> (1818353) less than the start offset (1940029).
> at kafka.log.LogSegment.read(LogSegment.scala:136)
> at kafka.log.Log.read(Log.scala:386)
> at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$readMessageSet(KafkaApis.scala:530)
> at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$readMessageSets$1.apply(KafkaApis.scala:476)
> at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$readMessageSets$1.apply(KafkaApis.scala:471)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:233)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:233)
> at scala.collection.immutable.Map$Map1.foreach(Map.scala:119)
> at 
> scala.collection.TraversableLike$class.map(TraversableLike.scala:233)
> at scala.collection.immutable.Map$Map1.map(Map.scala:107)
> at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$readMessageSets(KafkaApis.scala:471)
> at 
> kafka.server.KafkaApis$FetchRequestPurgatory.expire(KafkaApis.scala:783)
> at 
> kafka.server.KafkaApis$FetchRequestPurgatory.expire(KafkaApis.scala:765)
> at 
> kafka.server.RequestPurgatory$ExpiredRequestReaper.run(RequestPurgatory.scala:216)
> at java.lang.Thread.run(Thread.java:745)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1825) leadership election state is stale and never recovers without all brokers restarting

2014-12-19 Thread Joe Stein (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Stein updated KAFKA-1825:
-
Attachment: KAFKA-1825.executable.tgz

Attached a build of the code to reproduce; run ./producer on Ubuntu.

> leadership election state is stale and never recovers without all brokers 
> restarting
> 
>
> Key: KAFKA-1825
> URL: https://issues.apache.org/jira/browse/KAFKA-1825
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.1.1, 0.8.2
>Reporter: Joe Stein
>Priority: Critical
> Fix For: 0.8.2
>
> Attachments: KAFKA-1825.executable.tgz
>
>
> I am not sure what the cause is here, but I can succinctly and repeatedly 
> reproduce this issue. I tried with 0.8.1.1 and 0.8.2-beta and both behave in 
> the same manner.
> The code to reproduce this is here 
> https://github.com/stealthly/go_kafka_client/tree/wipAsyncSaramaProducer/producers
> scenario 3 brokers, 1 zookeeper, 1 client (each AWS c3.2xlarge instances)
> create topic 
> producer client sends in 380,000 messages/sec (attached executable)
> everything is fine until you kill -SIGTERM broker #2 
> then at that point the state goes bad for that topic. Even trying to use the 
> console producer (with the sarama producer off) doesn't work.
> Doing a describe, the yoyoma topic looks fine; ran preferred leadership 
> election, lots of issues... still can't produce... only resolution is bouncing 
> all brokers :(
> root@ip-10-233-52-139:/opt/kafka_2.10-0.8.1.1# bin/kafka-topics.sh 
> --zookeeper 10.218.189.234:2181 --describe
> Topic:yoyoma  PartitionCount:36   ReplicationFactor:3 Configs:
>   Topic: yoyoma   Partition: 0Leader: 1   Replicas: 1,2,3 Isr: 1,3
>   Topic: yoyoma   Partition: 1Leader: 1   Replicas: 2,3,1 Isr: 1,3
>   Topic: yoyoma   Partition: 2Leader: 1   Replicas: 3,1,2 Isr: 1,3
>   Topic: yoyoma   Partition: 3Leader: 1   Replicas: 1,3,2 Isr: 1,3
>   Topic: yoyoma   Partition: 4Leader: 1   Replicas: 2,1,3 Isr: 1,3
>   Topic: yoyoma   Partition: 5Leader: 1   Replicas: 3,2,1 Isr: 1,3
>   Topic: yoyoma   Partition: 6Leader: 1   Replicas: 1,2,3 Isr: 1,3
>   Topic: yoyoma   Partition: 7Leader: 1   Replicas: 2,3,1 Isr: 1,3
>   Topic: yoyoma   Partition: 8Leader: 1   Replicas: 3,1,2 Isr: 1,3
>   Topic: yoyoma   Partition: 9Leader: 1   Replicas: 1,3,2 Isr: 1,3
>   Topic: yoyoma   Partition: 10   Leader: 1   Replicas: 2,1,3 Isr: 1,3
>   Topic: yoyoma   Partition: 11   Leader: 1   Replicas: 3,2,1 Isr: 1,3
>   Topic: yoyoma   Partition: 12   Leader: 1   Replicas: 1,2,3 Isr: 1,3
>   Topic: yoyoma   Partition: 13   Leader: 1   Replicas: 2,3,1 Isr: 1,3
>   Topic: yoyoma   Partition: 14   Leader: 1   Replicas: 3,1,2 Isr: 1,3
>   Topic: yoyoma   Partition: 15   Leader: 1   Replicas: 1,3,2 Isr: 1,3
>   Topic: yoyoma   Partition: 16   Leader: 1   Replicas: 2,1,3 Isr: 1,3
>   Topic: yoyoma   Partition: 17   Leader: 1   Replicas: 3,2,1 Isr: 1,3
>   Topic: yoyoma   Partition: 18   Leader: 1   Replicas: 1,2,3 Isr: 1,3
>   Topic: yoyoma   Partition: 19   Leader: 1   Replicas: 2,3,1 Isr: 1,3
>   Topic: yoyoma   Partition: 20   Leader: 1   Replicas: 3,1,2 Isr: 1,3
>   Topic: yoyoma   Partition: 21   Leader: 1   Replicas: 1,3,2 Isr: 1,3
>   Topic: yoyoma   Partition: 22   Leader: 1   Replicas: 2,1,3 Isr: 1,3
>   Topic: yoyoma   Partition: 23   Leader: 1   Replicas: 3,2,1 Isr: 1,3
>   Topic: yoyoma   Partition: 24   Leader: 1   Replicas: 1,2,3 Isr: 1,3
>   Topic: yoyoma   Partition: 25   Leader: 1   Replicas: 2,3,1 Isr: 1,3
>   Topic: yoyoma   Partition: 26   Leader: 1   Replicas: 3,1,2 Isr: 1,3
>   Topic: yoyoma   Partition: 27   Leader: 1   Replicas: 1,3,2 Isr: 1,3
>   Topic: yoyoma   Partition: 28   Leader: 1   Replicas: 2,1,3 Isr: 1,3
>   Topic: yoyoma   Partition: 29   Leader: 1   Replicas: 3,2,1 Isr: 1,3
>   Topic: yoyoma   Partition: 30   Leader: 1   Replicas: 1,2,3 Isr: 1,3
>   Topic: yoyoma   Partition: 31   Leader: 1   Replicas: 2,3,1 Isr: 1,3
>   Topic: yoyoma   Partition: 32   Leader: 1   Replicas: 3,1,2 Isr: 1,3
>   Topic: yoyoma   Partition: 33   Leader: 1   Replicas: 1,3,2 Isr: 1,3
>   Topic: yoyoma   Partition: 34   Leader: 1   Replicas: 2,1,3 Isr: 1,3
>   Topic: yoyoma   Partition: 35   Leader: 1   Replicas: 3,2,1 Isr: 1,3
> root@ip-10-233-52-139:/opt/kafka_2.10-0.8.1.1# 
> bin/kafka-preferred-replica-election.sh --zookeeper 10.218.189.234:2181
> Successfully started preferred replica election for partitions 
> Set([yoyoma,29], [yoyoma,14], [yoyoma,22], [yoyoma,15

[jira] [Created] (KAFKA-1825) leadership election state is stale and never recovers without all brokers restarting

2014-12-19 Thread Joe Stein (JIRA)
Joe Stein created KAFKA-1825:


 Summary: leadership election state is stale and never recovers 
without all brokers restarting
 Key: KAFKA-1825
 URL: https://issues.apache.org/jira/browse/KAFKA-1825
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.1.1, 0.8.2
Reporter: Joe Stein
Priority: Critical
 Fix For: 0.8.2


I am not sure what the cause is here, but I can succinctly and repeatedly 
reproduce this issue. I tried with 0.8.1.1 and 0.8.2-beta and both behave in 
the same manner.

The code to reproduce this is here 
https://github.com/stealthly/go_kafka_client/tree/wipAsyncSaramaProducer/producers

scenario 3 brokers, 1 zookeeper, 1 client (each AWS c3.2xlarge instances)

create topic 
producer client sends in 380,000 messages/sec (attached executable)

everything is fine until you kill -SIGTERM broker #2 

then at that point the state goes bad for that topic. Even trying to use the 
console producer (with the sarama producer off) doesn't work.

Doing a describe, the yoyoma topic looks fine; ran preferred leadership 
election, lots of issues... still can't produce... only resolution is bouncing 
all brokers :(

root@ip-10-233-52-139:/opt/kafka_2.10-0.8.1.1# bin/kafka-topics.sh --zookeeper 
10.218.189.234:2181 --describe
Topic:yoyomaPartitionCount:36   ReplicationFactor:3 Configs:
Topic: yoyoma   Partition: 0Leader: 1   Replicas: 1,2,3 Isr: 1,3
Topic: yoyoma   Partition: 1Leader: 1   Replicas: 2,3,1 Isr: 1,3
Topic: yoyoma   Partition: 2Leader: 1   Replicas: 3,1,2 Isr: 1,3
Topic: yoyoma   Partition: 3Leader: 1   Replicas: 1,3,2 Isr: 1,3
Topic: yoyoma   Partition: 4Leader: 1   Replicas: 2,1,3 Isr: 1,3
Topic: yoyoma   Partition: 5Leader: 1   Replicas: 3,2,1 Isr: 1,3
Topic: yoyoma   Partition: 6Leader: 1   Replicas: 1,2,3 Isr: 1,3
Topic: yoyoma   Partition: 7Leader: 1   Replicas: 2,3,1 Isr: 1,3
Topic: yoyoma   Partition: 8Leader: 1   Replicas: 3,1,2 Isr: 1,3
Topic: yoyoma   Partition: 9Leader: 1   Replicas: 1,3,2 Isr: 1,3
Topic: yoyoma   Partition: 10   Leader: 1   Replicas: 2,1,3 Isr: 1,3
Topic: yoyoma   Partition: 11   Leader: 1   Replicas: 3,2,1 Isr: 1,3
Topic: yoyoma   Partition: 12   Leader: 1   Replicas: 1,2,3 Isr: 1,3
Topic: yoyoma   Partition: 13   Leader: 1   Replicas: 2,3,1 Isr: 1,3
Topic: yoyoma   Partition: 14   Leader: 1   Replicas: 3,1,2 Isr: 1,3
Topic: yoyoma   Partition: 15   Leader: 1   Replicas: 1,3,2 Isr: 1,3
Topic: yoyoma   Partition: 16   Leader: 1   Replicas: 2,1,3 Isr: 1,3
Topic: yoyoma   Partition: 17   Leader: 1   Replicas: 3,2,1 Isr: 1,3
Topic: yoyoma   Partition: 18   Leader: 1   Replicas: 1,2,3 Isr: 1,3
Topic: yoyoma   Partition: 19   Leader: 1   Replicas: 2,3,1 Isr: 1,3
Topic: yoyoma   Partition: 20   Leader: 1   Replicas: 3,1,2 Isr: 1,3
Topic: yoyoma   Partition: 21   Leader: 1   Replicas: 1,3,2 Isr: 1,3
Topic: yoyoma   Partition: 22   Leader: 1   Replicas: 2,1,3 Isr: 1,3
Topic: yoyoma   Partition: 23   Leader: 1   Replicas: 3,2,1 Isr: 1,3
Topic: yoyoma   Partition: 24   Leader: 1   Replicas: 1,2,3 Isr: 1,3
Topic: yoyoma   Partition: 25   Leader: 1   Replicas: 2,3,1 Isr: 1,3
Topic: yoyoma   Partition: 26   Leader: 1   Replicas: 3,1,2 Isr: 1,3
Topic: yoyoma   Partition: 27   Leader: 1   Replicas: 1,3,2 Isr: 1,3
Topic: yoyoma   Partition: 28   Leader: 1   Replicas: 2,1,3 Isr: 1,3
Topic: yoyoma   Partition: 29   Leader: 1   Replicas: 3,2,1 Isr: 1,3
Topic: yoyoma   Partition: 30   Leader: 1   Replicas: 1,2,3 Isr: 1,3
Topic: yoyoma   Partition: 31   Leader: 1   Replicas: 2,3,1 Isr: 1,3
Topic: yoyoma   Partition: 32   Leader: 1   Replicas: 3,1,2 Isr: 1,3
Topic: yoyoma   Partition: 33   Leader: 1   Replicas: 1,3,2 Isr: 1,3
Topic: yoyoma   Partition: 34   Leader: 1   Replicas: 2,1,3 Isr: 1,3
Topic: yoyoma   Partition: 35   Leader: 1   Replicas: 3,2,1 Isr: 1,3
root@ip-10-233-52-139:/opt/kafka_2.10-0.8.1.1# 
bin/kafka-preferred-replica-election.sh --zookeeper 10.218.189.234:2181
Successfully started preferred replica election for partitions Set([yoyoma,29], 
[yoyoma,14], [yoyoma,22], [yoyoma,15], [yoyoma,3], [yoyoma,11], [yoyoma,32], 
[yoyoma,23], [yoyoma,18], [yoyoma,25], [yoyoma,26], [yoyoma,1], [yoyoma,9], 
[yoyoma,33], [yoyoma,5], [yoyoma,12], [yoyoma,20], [yoyoma,4], [yoyoma,7], 
[yoyoma,24], [yoyoma,35], [yoyoma,10], [yoyoma,8], [yoyoma,2], [yoyoma,21], 
[yoyoma,31], [yoyoma,28], [yoyoma,19], [yoyoma,16], [yoyoma,13], [yoyoma,34], 
[yoyoma,0], [test-1210,0], [yoyoma,30],

[jira] [Updated] (KAFKA-766) Isr shrink/expand check is fragile

2014-12-18 Thread Joe Stein (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-766?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Stein updated KAFKA-766:

Fix Version/s: 0.8.3

> Isr shrink/expand check is fragile
> --
>
> Key: KAFKA-766
> URL: https://issues.apache.org/jira/browse/KAFKA-766
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.0
>Reporter: Sriram Subramanian
>Assignee: Neha Narkhede
> Fix For: 0.8.3
>
>
> Currently the isr check is coupled tightly with the produce batch size. For 
> example, if the producer batch size is 1 messages and isr check is 4000 
> messages, we continuously oscillate between shrinking isr and expanding isr 
> every second. This is because a single produce request throws the replica out 
> of the isr. This results in hundreds of calls to ZK (we still dont have multi 
> write). This can be alleviated by making the producer batch size smaller than 
> the isr check size. 
> Going forward, we should try to not have this coupling. It is worth 
> investigating if we can make the check more robust under such scenarios. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1822) Add "echo" request

2014-12-17 Thread Joe Stein (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14249934#comment-14249934
 ] 

Joe Stein commented on KAFKA-1822:
--

I like the idea of this. Instead of calling it Echo, can we implement the 
Heartbeat request/response added in KAFKA-1462 for the brokers too?

> Add "echo" request
> --
>
> Key: KAFKA-1822
> URL: https://issues.apache.org/jira/browse/KAFKA-1822
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Reporter: Gwen Shapira
>Assignee: Gwen Shapira
> Fix For: 0.8.3
>
>
> Currently there is no simple way to generate a request and validate we 
> receive a response without adding a lot of dependencies for the test.
> Kafka request classes have quite a few dependencies, so they are not really 
> usable when testing infrastructure components or clients.
> Generating a byte-array with meaningless request key id as it is done in 
> SocketServerTest results in unknown request exception that must be handled. 
> I suggest adding an EchoRequest, EchoResponse and EchoHandler. The Request 
> will be the usual header and a bytearray. The Response will be a response 
> header and the same bytearray echoed back.
> Should be useful for client developers and when testing infrastructure 
> changes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1822) Add "echo" request

2014-12-17 Thread Joe Stein (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Stein updated KAFKA-1822:
-
  Description: 
Currently there is no simple way to generate a request and validate we receive 
a response without adding a lot of dependencies for the test.
Kafka request classes have quite a few dependencies, so they are not really 
usable when testing infrastructure components or clients.
Generating a byte-array with meaningless request key id as it is done in 
SocketServerTest results in unknown request exception that must be handled. 

I suggest adding an EchoRequest, EchoResponse and EchoHandler. The Request will 
be the usual header and a bytearray. The Response will be a response header and 
the same bytearray echoed back.

Should be useful for client developers and when testing infrastructure changes.



  was:

Currently there is no simple way to generate a request and validate we receive 
a response without adding a lot of dependencies for the test.
Kafka request classes have quite a few dependencies, so they are not really 
usable when testing infrastructure components or clients.
Generating a byte-array with meaningless request key id as it is done in 
SocketServerTest results in unknown request exception that must be handled. 

I suggest adding an EchoRequest, EchoResponse and EchoHandler. The Request will 
be the usual header and a bytearray. The Response will be a response header and 
the same bytearray echoed back.

Should be useful for client developers and when testing infrastructure changes.



Fix Version/s: 0.8.3

> Add "echo" request
> --
>
> Key: KAFKA-1822
> URL: https://issues.apache.org/jira/browse/KAFKA-1822
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Reporter: Gwen Shapira
>Assignee: Gwen Shapira
> Fix For: 0.8.3
>
>
> Currently there is no simple way to generate a request and validate we 
> receive a response without adding a lot of dependencies for the test.
> Kafka request classes have quite a few dependencies, so they are not really 
> usable when testing infrastructure components or clients.
> Generating a byte-array with meaningless request key id as it is done in 
> SocketServerTest results in unknown request exception that must be handled. 
> I suggest adding an EchoRequest, EchoResponse and EchoHandler. The Request 
> will be the usual header and a bytearray. The Response will be a response 
> header and the same bytearray echoed back.
> Should be useful for client developers and when testing infrastructure 
> changes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1753) add --decommission-broker option

2014-12-17 Thread Joe Stein (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14249913#comment-14249913
 ] 

Joe Stein commented on KAFKA-1753:
--

[~Dmitry Pekar] please patch this feature option on this ticket. We should see 
about committing the other two fixes you patched to trunk.

> add --decommission-broker option
> 
>
> Key: KAFKA-1753
> URL: https://issues.apache.org/jira/browse/KAFKA-1753
> Project: Kafka
>  Issue Type: Sub-task
>  Components: tools
>Reporter: Dmitry Pekar
>Assignee: Dmitry Pekar
> Fix For: 0.8.3
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1810) Add IP Filtering / Whitelists-Blacklists

2014-12-14 Thread Joe Stein (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14245972#comment-14245972
 ] 

Joe Stein commented on KAFKA-1810:
--

+1 I like this approach because it lets you better manage which hosts can 
connect to the brokers, giving them resilience in their network environment. 
This is an implementation of what KAFKA-1688 will be layering on and making 
pluggable. I also see overlap with 
https://issues.apache.org/jira/browse/KAFKA-1786 and this might be a good place 
to start building that out too.

> Add IP Filtering / Whitelists-Blacklists 
> -
>
> Key: KAFKA-1810
> URL: https://issues.apache.org/jira/browse/KAFKA-1810
> Project: Kafka
>  Issue Type: New Feature
>  Components: core, network
>Reporter: Jeff Holoman
>Assignee: Jeff Holoman
>Priority: Minor
> Fix For: 0.8.3
>
>
> While longer-term goals of security in Kafka are on the roadmap there exists 
> some value for the ability to restrict connection to Kafka brokers based on 
> IP address. This is not intended as a replacement for security but more of a 
> precaution against misconfiguration and to provide some level of control to 
> Kafka administrators about who is reading/writing to their cluster.
> 1) In some organizations software administration vs o/s systems 
> administration and network administration is disjointed and not well 
> choreographed. Providing software administrators the ability to configure 
> their platform relatively independently (after initial configuration) from 
> Systems administrators is desirable.
> 2) Configuration and deployment is sometimes error prone and there are 
> situations when test environments could erroneously read/write to production 
> environments
> 3) An additional precaution against reading sensitive data is typically 
> welcomed in most large enterprise deployments.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1688) Add authorization interface and naive implementation

2014-12-14 Thread Joe Stein (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1688?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Stein updated KAFKA-1688:
-
Fix Version/s: 0.8.3
 Assignee: (was: Sriharsha Chintalapani)

> Add authorization interface and naive implementation
> 
>
> Key: KAFKA-1688
> URL: https://issues.apache.org/jira/browse/KAFKA-1688
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Jay Kreps
> Fix For: 0.8.3
>
>
> Add a PermissionManager interface as described here:
> https://cwiki.apache.org/confluence/display/KAFKA/Security
> (possibly there is a better name?)
> Implement calls to the PermissionsManager in KafkaApis for the main requests 
> (FetchRequest, ProduceRequest, etc). We will need to add a new error code and 
> exception to the protocol to indicate "permission denied".
> Add a server configuration to give the class you want to instantiate that 
> implements that interface. That class can define its own configuration 
> properties from the main config file.
> Provide a simple implementation of this interface which just takes a user and 
> ip whitelist and permits those in either of the whitelists to do anything, 
> and denies all others.
> Rather than writing an integration test for this class we can probably just 
> use this class for the TLS and SASL authentication testing.
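A rough sketch of the interface plus the naive whitelist implementation
described above; names and signatures are illustrative, not a committed API:

{code}
trait PermissionManager {
  def isPermitted(user: String, ip: String, operation: String): Boolean
}

// Naive implementation: anyone on either whitelist may do anything,
// everyone else is denied.
class WhitelistPermissionManager(userWhitelist: Set[String], ipWhitelist: Set[String])
    extends PermissionManager {
  def isPermitted(user: String, ip: String, operation: String): Boolean =
    userWhitelist.contains(user) || ipWhitelist.contains(ip)
}
{code}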



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1817) AdminUtils.createTopic vs kafka-topics.sh --create with partitions

2014-12-13 Thread Joe Stein (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Stein updated KAFKA-1817:
-
Issue Type: Sub-task  (was: Bug)
Parent: KAFKA-1694

> AdminUtils.createTopic vs kafka-topics.sh --create with partitions
> --
>
> Key: KAFKA-1817
> URL: https://issues.apache.org/jira/browse/KAFKA-1817
> Project: Kafka
>  Issue Type: Sub-task
>Affects Versions: 0.8.2
> Environment: debian linux current version  up to date
>Reporter: Jason Kania
> Fix For: 0.8.3
>
>
> When topics are created using AdminUtils.createTopic in code, no partitions 
> folder is created. The zookeeper shell shows this:
> ls /brokers/topics/foshizzle
> []
> However, when kafka-topics.sh --create is run, the partitions folder is 
> created:
> ls /brokers/topics/foshizzle
> [partitions]
> The unfortunately useless error message "KeeperErrorCode = NoNode for 
> /brokers/topics/periodicReading/partitions" makes it unclear what to do. When 
> the topics are listed via kafka-topics.sh, they appear to have been created 
> fine. It would be good if the exception was wrapped by Kafka to suggest 
> looking in the zookeeper shell, so a person didn't have to dig around to 
> understand the meaning of this path...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1817) AdminUtils.createTopic vs kafka-topics.sh --create with partitions

2014-12-13 Thread Joe Stein (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Stein updated KAFKA-1817:
-
Fix Version/s: 0.8.3

I added this as a sub-ticket of the command line refactoring so we can do this 
there.

> AdminUtils.createTopic vs kafka-topics.sh --create with partitions
> --
>
> Key: KAFKA-1817
> URL: https://issues.apache.org/jira/browse/KAFKA-1817
> Project: Kafka
>  Issue Type: Sub-task
>Affects Versions: 0.8.2
> Environment: debian linux current version  up to date
>Reporter: Jason Kania
> Fix For: 0.8.3
>
>
> When topics are created using AdminUtils.createTopic in code, no partitions 
> folder is created. The zookeeper shell shows this:
> ls /brokers/topics/foshizzle
> []
> However, when kafka-topics.sh --create is run, the partitions folder is 
> created:
> ls /brokers/topics/foshizzle
> [partitions]
> The unfortunately useless error message "KeeperErrorCode = NoNode for 
> /brokers/topics/periodicReading/partitions" makes it unclear what to do. When 
> the topics are listed via kafka-topics.sh, they appear to have been created 
> fine. It would be good if the exception was wrapped by Kafka to suggest 
> looking in the zookeeper shell, so a person didn't have to dig around to 
> understand the meaning of this path...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1694) kafka command line and centralized operations

2014-12-13 Thread Joe Stein (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Stein updated KAFKA-1694:
-
Description: 

https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Command+Line+and+Related+Improvements

  was:


https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Command+Line+and+Related+Improvements

   Assignee: Andrii Biletskyi

+1 multiple admin request/response messages

> kafka command line and centralized operations
> -
>
> Key: KAFKA-1694
> URL: https://issues.apache.org/jira/browse/KAFKA-1694
> Project: Kafka
>  Issue Type: Bug
>Reporter: Joe Stein
>Assignee: Andrii Biletskyi
>Priority: Critical
> Fix For: 0.8.3
>
> Attachments: KAFKA-1772_1802_1775_1774_v2.patch
>
>
> https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Command+Line+and+Related+Improvements



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1815) ServerShutdownTest fails in trunk.

2014-12-12 Thread Joe Stein (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Stein updated KAFKA-1815:
-
Fix Version/s: 0.8.3

CI says it is still broken https://builds.apache.org/view/All/job/Kafka-trunk/ 
and it was broken for me when I did that commit. I didn't see this ticket 
until just now; I will look through it later when I have some time towards 
fixing this. I mentioned it on KAFKA-1650 also.

> ServerShutdownTest fails in trunk.
> --
>
> Key: KAFKA-1815
> URL: https://issues.apache.org/jira/browse/KAFKA-1815
> Project: Kafka
>  Issue Type: Bug
>Reporter: Anatoly Fayngelerin
>Priority: Minor
> Fix For: 0.8.3
>
> Attachments: shutdown_test_fix.patch
>
>
> I ran into these failures consistently when trying to build Kafka locally:
> kafka.server.ServerShutdownTest > testCleanShutdown FAILED
> java.lang.NullPointerException
> at 
> kafka.server.ServerShutdownTest$$anonfun$verifyNonDaemonThreadsStatus$2.apply(ServerShutdownTest.scala:147)
> at 
> kafka.server.ServerShutdownTest$$anonfun$verifyNonDaemonThreadsStatus$2.apply(ServerShutdownTest.scala:147)
> at 
> scala.collection.TraversableOnce$$anonfun$count$1.apply(TraversableOnce.scala:114)
> at 
> scala.collection.TraversableOnce$$anonfun$count$1.apply(TraversableOnce.scala:113)
> at 
> scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
> at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:105)
> at 
> scala.collection.TraversableOnce$class.count(TraversableOnce.scala:113)
> at scala.collection.mutable.ArrayOps$ofRef.count(ArrayOps.scala:105)
> at 
> kafka.server.ServerShutdownTest.verifyNonDaemonThreadsStatus(ServerShutdownTest.scala:147)
> at 
> kafka.server.ServerShutdownTest.testCleanShutdown(ServerShutdownTest.scala:101)
> kafka.server.ServerShutdownTest > testCleanShutdownWithDeleteTopicEnabled 
> FAILED
> java.lang.NullPointerException
> at 
> kafka.server.ServerShutdownTest$$anonfun$verifyNonDaemonThreadsStatus$2.apply(ServerShutdownTest.scala:147)
> at 
> kafka.server.ServerShutdownTest$$anonfun$verifyNonDaemonThreadsStatus$2.apply(ServerShutdownTest.scala:147)
> at 
> scala.collection.TraversableOnce$$anonfun$count$1.apply(TraversableOnce.scala:114)
> at 
> scala.collection.TraversableOnce$$anonfun$count$1.apply(TraversableOnce.scala:113)
> at 
> scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
> at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:105)
> at 
> scala.collection.TraversableOnce$class.count(TraversableOnce.scala:113)
> at scala.collection.mutable.ArrayOps$ofRef.count(ArrayOps.scala:105)
> at 
> kafka.server.ServerShutdownTest.verifyNonDaemonThreadsStatus(ServerShutdownTest.scala:147)
> at 
> kafka.server.ServerShutdownTest.testCleanShutdownWithDeleteTopicEnabled(ServerShutdownTest.scala:114)
> kafka.server.ServerShutdownTest > testCleanShutdownAfterFailedStartup FAILED
> java.lang.NullPointerException
> at 
> kafka.server.ServerShutdownTest$$anonfun$verifyNonDaemonThreadsStatus$2.apply(ServerShutdownTest.scala:147)
> at 
> kafka.server.ServerShutdownTest$$anonfun$verifyNonDaemonThreadsStatus$2.apply(ServerShutdownTest.scala:147)
> at 
> scala.collection.TraversableOnce$$anonfun$count$1.apply(TraversableOnce.scala:114)
> at 
> scala.collection.TraversableOnce$$anonfun$count$1.apply(TraversableOnce.scala:113)
> at 
> scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
> at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:105)
> at 
> scala.collection.TraversableOnce$class.count(TraversableOnce.scala:113)
> at scala.collection.mutable.ArrayOps$ofRef.count(ArrayOps.scala:105)
> at 
> kafka.server.ServerShutdownTest.verifyNonDaemonThreadsStatus(ServerShutdownTest.scala:147)
> at 
> kafka.server.ServerShutdownTest.testCleanShutdownAfterFailedStartup(ServerShutdownTest.scala:141)
> It looks like Jenkins also had issues with these tests:
> https://builds.apache.org/job/Kafka-trunk/351/console
> I would like to provide a patch that fixes this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1812) Allow IpV6 in configuration with parseCsvMap

2014-12-12 Thread Joe Stein (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Stein updated KAFKA-1812:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

+1, committed to trunk. Thanks for the patch, Jeff, and for the review, Gwen!

>  Allow IpV6 in configuration with parseCsvMap
> -
>
> Key: KAFKA-1812
> URL: https://issues.apache.org/jira/browse/KAFKA-1812
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jeff Holoman
>Assignee: Jeff Holoman
>Priority: Minor
>  Labels: newbie
> Fix For: 0.8.3
>
> Attachments: KAFKA-1812_2014-12-10_21:38:59.patch
>
>
> The current implementation of parseCsvMap in Utils expects k:v,k:v. This 
> modifies that function to accept a string with multiple ":" characters per 
> pair, splitting on the last occurrence to separate key and value. 
> This limitation is noted in the Reviewboard comments for KAFKA-1512
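
As an illustration of the behavior described above, a minimal sketch that splits each comma-separated pair on its last ':' so an IPv6 key such as ::1 keeps its internal colons; the function name is borrowed from the description, but this is not the actual Utils implementation:

object CsvMap {
  // "k1:v1,k2:v2" -> Map("k1" -> "v1", ...), splitting each pair on its LAST ':'
  def parseCsvMap(s: String): Map[String, String] =
    s.split(",").map(_.trim).filter(_.nonEmpty).map { pair =>
      val i = pair.lastIndexOf(':')
      require(i >= 0, "missing ':' in '" + pair + "'")
      (pair.substring(0, i), pair.substring(i + 1))
    }.toMap

  def main(args: Array[String]): Unit =
    // The IPv6 key "::1" survives; only the final colon separates key and value
    println(parseCsvMap("::1:9092, 127.0.0.1:9093")) // Map(::1 -> 9092, 127.0.0.1 -> 9093)
}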



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-742) Existing directories under the Kafka data directory without any data cause process to not start

2014-12-11 Thread Joe Stein (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Stein updated KAFKA-742:

Fix Version/s: 0.8.3

> Existing directories under the Kafka data directory without any data cause 
> process to not start
> ---
>
> Key: KAFKA-742
> URL: https://issues.apache.org/jira/browse/KAFKA-742
> Project: Kafka
>  Issue Type: Bug
>  Components: config
>Affects Versions: 0.8.0
>Reporter: Chris Curtin
>Assignee: Ashish Kumar Singh
> Fix For: 0.8.3
>
>
> I incorrectly set up the configuration file to have the metrics go to 
> /var/kafka/metrics while the logs were in /var/kafka. On startup I received 
> the following error, and then the daemon exited:
> 30   [main] INFO  kafka.log.LogManager  - [Log Manager on Broker 0] Loading 
> log 'metrics'
> 32   [main] FATAL kafka.server.KafkaServerStartable  - Fatal error during 
> KafkaServerStable startup. Prepare to shutdown
> java.lang.StringIndexOutOfBoundsException: String index out of range: -1
> at java.lang.String.substring(String.java:1937)
> at 
> kafka.log.LogManager.kafka$log$LogManager$$parseTopicPartitionName(LogManager.scala:335)
> at 
> kafka.log.LogManager$$anonfun$loadLogs$1$$anonfun$apply$3.apply(LogManager.scala:112)
> at 
> kafka.log.LogManager$$anonfun$loadLogs$1$$anonfun$apply$3.apply(LogManager.scala:109)
> at 
> scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:34)
> at scala.collection.mutable.ArrayOps.foreach(ArrayOps.scala:34)
> at 
> kafka.log.LogManager$$anonfun$loadLogs$1.apply(LogManager.scala:109)
> at 
> kafka.log.LogManager$$anonfun$loadLogs$1.apply(LogManager.scala:101)
> at 
> scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:34)
> at 
> scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:32)
> at kafka.log.LogManager.loadLogs(LogManager.scala:101)
> at kafka.log.LogManager.(LogManager.scala:62)
> at kafka.server.KafkaServer.startup(KafkaServer.scala:59)
> at 
> kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:34)
> at kafka.Kafka$.main(Kafka.scala:46)
> at kafka.Kafka.main(Kafka.scala)
> 34   [main] INFO  kafka.server.KafkaServer  - [Kafka Server 0], shutting down
> This was on a brand new cluster so no data or metrics logs existed yet.
> Moving the metrics to their own directory (not a child of the logs) allowed 
> the daemon to start.
> Took a few minutes to figure out what was wrong.
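
The crash reported above boils down to calling substring with the result of lastIndexOf('-') on a directory name that contains no '-': lastIndexOf returns -1 and substring throws. A minimal sketch of a defensive parse, with hypothetical names rather than the actual LogManager code:

object TopicPartitionName {
  // Log directories are expected to be named like "mytopic-0".
  def parse(dirName: String): Option[(String, Int)] = {
    val i = dirName.lastIndexOf('-')
    if (i <= 0 || i == dirName.length - 1) None // "metrics" -> None: skip, don't crash
    else
      try Some((dirName.substring(0, i), dirName.substring(i + 1).toInt))
      catch { case _: NumberFormatException => None }
  }

  def main(args: Array[String]): Unit = {
    println(parse("mytopic-0")) // Some((mytopic,0))
    println(parse("metrics"))   // None, instead of StringIndexOutOfBoundsException
  }
}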



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1650) Mirror Maker could lose data on unclean shutdown.

2014-12-11 Thread Joe Stein (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14243423#comment-14243423
 ] 

Joe Stein commented on KAFKA-1650:
--

I am getting a local failure running ./gradlew test 

kafka.server.ServerShutdownTest > testCleanShutdownAfterFailedStartup FAILED
java.lang.NullPointerException
at 
kafka.server.ServerShutdownTest$$anonfun$verifyNonDaemonThreadsStatus$2.apply(ServerShutdownTest.scala:147)
at 
kafka.server.ServerShutdownTest$$anonfun$verifyNonDaemonThreadsStatus$2.apply(ServerShutdownTest.scala:147)
at 
scala.collection.TraversableOnce$$anonfun$count$1.apply(TraversableOnce.scala:114)
at 
scala.collection.TraversableOnce$$anonfun$count$1.apply(TraversableOnce.scala:113)
at 
scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
at 
scala.collection.TraversableOnce$class.count(TraversableOnce.scala:113)
at scala.collection.mutable.ArrayOps$ofRef.count(ArrayOps.scala:108)
at 
kafka.server.ServerShutdownTest.verifyNonDaemonThreadsStatus(ServerShutdownTest.scala:147)
at 
kafka.server.ServerShutdownTest.testCleanShutdownAfterFailedStartup(ServerShutdownTest.scala:141)

kafka.server.ServerShutdownTest > testCleanShutdownWithDeleteTopicEnabled FAILED
java.lang.NullPointerException
at 
kafka.server.ServerShutdownTest$$anonfun$verifyNonDaemonThreadsStatus$2.apply(ServerShutdownTest.scala:147)
at 
kafka.server.ServerShutdownTest$$anonfun$verifyNonDaemonThreadsStatus$2.apply(ServerShutdownTest.scala:147)
at 
scala.collection.TraversableOnce$$anonfun$count$1.apply(TraversableOnce.scala:114)
at 
scala.collection.TraversableOnce$$anonfun$count$1.apply(TraversableOnce.scala:113)
at 
scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
at 
scala.collection.TraversableOnce$class.count(TraversableOnce.scala:113)
at scala.collection.mutable.ArrayOps$ofRef.count(ArrayOps.scala:108)
at 
kafka.server.ServerShutdownTest.verifyNonDaemonThreadsStatus(ServerShutdownTest.scala:147)
at 
kafka.server.ServerShutdownTest.testCleanShutdownWithDeleteTopicEnabled(ServerShutdownTest.scala:114)

kafka.server.ServerShutdownTest > testConsecutiveShutdown PASSED

kafka.server.ServerShutdownTest > testCleanShutdown FAILED
java.lang.NullPointerException
at 
kafka.server.ServerShutdownTest$$anonfun$verifyNonDaemonThreadsStatus$2.apply(ServerShutdownTest.scala:147)
at 
kafka.server.ServerShutdownTest$$anonfun$verifyNonDaemonThreadsStatus$2.apply(ServerShutdownTest.scala:147)
at 
scala.collection.TraversableOnce$$anonfun$count$1.apply(TraversableOnce.scala:114)
at 
scala.collection.TraversableOnce$$anonfun$count$1.apply(TraversableOnce.scala:113)
at 
scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
at 
scala.collection.TraversableOnce$class.count(TraversableOnce.scala:113)
at scala.collection.mutable.ArrayOps$ofRef.count(ArrayOps.scala:108)
at 
kafka.server.ServerShutdownTest.verifyNonDaemonThreadsStatus(ServerShutdownTest.scala:147)
at 
kafka.server.ServerShutdownTest.testCleanShutdown(ServerShutdownTest.scala:101)

and the CI is broken too; I just ran another build to triple check 
https://builds.apache.org/view/All/job/Kafka-trunk/352/

I am a bit lost on this ticket. It looks like the code is committed to trunk 
(commit 2801629964882015a9148e1c0ade22da46376faa), but this JIRA isn't marked 
resolved, has no fix version set (and there are more patches after the commit), 
and tests are failing. [~guozhang] can you take a look please (it looks like 
your commit)?

> Mirror Maker could lose data on unclean shutdown.
> -
>
> Key: KAFKA-1650
> URL: https://issues.apache.org/jira/browse/KAFKA-1650
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Attachments: KAFKA-1650.patch, KAFKA-1650_2014-10-06_10:17:46.patch, 
> KAFKA-1650_2014-11-12_09:51:30.patch, KAFKA-1650_2014-11-17_18:44:37.patch, 
> KAFKA-1650_2014-11-20_12:00:16.patch, KAFKA-1650_2014-11-24_08:15:17.patch, 
> KAFKA-1650_2014-12-03_15:02:31.patch, KAFKA-1650_2014-12-03_19:02:13.patch, 
> KAFKA-1650_2014-12-04_11:59:07.patch, KAFKA-1650_2014-12-06_18:58:57.patch, 
> KAFKA-1650_2014-12-08_01:36:01.patch
>
>
> Currently if mirror maker got shutdown uncleanly, the data in the data 
> channel and buffer could potentially be lost. With the new producer's 
> callback, this issue 

[jira] [Updated] (KAFKA-1812) Allow IpV6 in configuration with parseCsvMap

2014-12-11 Thread Joe Stein (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Stein updated KAFKA-1812:
-
Reviewer: Gwen Shapira

>  Allow IpV6 in configuration with parseCsvMap
> -
>
> Key: KAFKA-1812
> URL: https://issues.apache.org/jira/browse/KAFKA-1812
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jeff Holoman
>Assignee: Jeff Holoman
>Priority: Minor
>  Labels: newbie
> Fix For: 0.8.3
>
> Attachments: KAFKA-1812_2014-12-10_21:38:59.patch
>
>
> The current implementation of parseCsvMap in Utils expects k:v,k:v. This 
> modifies that function to accept a string with multiple ":" characters per 
> pair, splitting on the last occurrence to separate key and value. 
> This limitation is noted in the Reviewboard comments for KAFKA-1512



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1207) Launch Kafka from within Apache Mesos

2014-12-10 Thread Joe Stein (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14241143#comment-14241143
 ] 

Joe Stein commented on KAFKA-1207:
--

Hey [~jayson.minard] we have gone back and forth over the last year between "build a 
scheduler" just for Kafka and "build an executor layer that works in 
Marathon/Aurora". What we did first was give Aurora a shot, since it already has 
an executor (Thermos), to see about getting Kafka to run there. The script for 
doing what we did is here 
https://github.com/stealthly/borealis/blob/master/scripts/kafka.aurora. It relied 
on an undocumented feature in Aurora, which Bill Farner talked about when I spoke 
with him on a podcast 
http://allthingshadoop.com/2014/10/26/resource-scheduling-and-task-launching-with-apache-mesos-and-apache-aurora-at-twitter/

Anyways, there were/are issues with that implementation, so we then decided to 
give Marathon https://mesosphere.github.io/marathon/docs/ a try. We started off 
with this code as a pattern to use 
https://github.com/brndnmtthws/kafka-on-marathon and so far it is working out 
great. It definitely added more work on our side, but it is running and doing 
exactly what we expect.

We have been speaking with others about this too and think we could come up 
with a standalone scheduler that would work out of the box. I don't know if it 
makes sense, though, for that to be a JVM process. We were thinking of 
writing it in Go. One *VERY* important reason to have another shell launching 
Kafka is that you want to be able to change scripts and bounce brokers (you 
kind of have to do this), and if you do a rolling restart or something with your 
tasks, Mesos will schedule them wherever it wants. Some Kafka improvements are 
coming that mitigate that some 
https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Command+Line+and+Related+Improvements
 but I don't think it would ever be 100% (Kafka is not like Storm or Spark in 
how it runs). On the Mesos side you can manage this with roles and constraints, 
but at the end of the day you are dealing with a *persistent* server. The way 
we have gotten around this is using the shell script as an agent that can fetch 
the updated configs, do a restart of the process, etc, etc, etc. There is a new 
feature coming out in Mesos https://issues.apache.org/jira/browse/MESOS-1554 
that will make this better; however, I still like the supervisor shell script 
strategy ... we could absolutely morph the supervisor shell script strategy into 
a custom scheduler/executor (framework) for Kafka, but I am not sure whether the 
project would accept Go code for this feature. I would be +1 on it 
going in and have a few engineers available to work on it over the next 1-2 
months. We could also write the whole thing in Java or Scala, though I still 
don't know if that would make it any easier/better to support in the 
community vs Go.

Would love more thoughts and discussions on this here.

> Launch Kafka from within Apache Mesos
> -
>
> Key: KAFKA-1207
> URL: https://issues.apache.org/jira/browse/KAFKA-1207
> Project: Kafka
>  Issue Type: Bug
>Reporter: Joe Stein
>  Labels: mesos
> Fix For: 0.9.0
>
> Attachments: KAFKA-1207.patch, KAFKA-1207_2014-01-19_00:04:58.patch, 
> KAFKA-1207_2014-01-19_00:48:49.patch
>
>
> There are a few components to this.
> 1) The Framework:  This is going to be responsible for starting up and 
> managing the fail over of brokers within the mesos cluster.  This will have 
> to get some Kafka focused paramaters for launching new replica brokers, 
> moving topics and partitions around based on what is happening in the grid 
> through time.
> 2) The Scheduler: This is what is going to ask for resources for Kafka 
> brokers (new ones, replacement ones, commissioned ones) and other operations 
> such as stopping tasks (decommissioning brokers).  I think this should also 
> expose a user interface (or at least a rest api) for producers and consumers 
> so we can have producers and consumers run inside of the mesos cluster if 
> folks want (just add the jar)
> 3) The Executor : This is the task launcher.  It launches tasks and kills them 
> off.
> 4) Sharing data between Scheduler and Executor: I looked at a few 
> implementations of this.  I like parts of the Storm implementation but think 
> using the environment variable 
> ExecutorInfo.CommandInfo.Environment.Variables[] is the best shot.  We can 
> have a command line bin/kafka-mesos-scheduler-start.sh that would build the 
> contrib project if not already built and support conf/server.properties to 
> start.
> The Framework and operating Scheduler would run on an administrative node. 
>  I am probably going to hook Apache Curator into it so it can do its own 
> failover to a anothe

[jira] [Commented] (KAFKA-1813) Build fails for scala 2.9.2

2014-12-09 Thread Joe Stein (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14239567#comment-14239567
 ] 

Joe Stein commented on KAFKA-1813:
--

Did you run the new bootstrap we added to the README?

> Build fails for scala 2.9.2
> ---
>
> Key: KAFKA-1813
> URL: https://issues.apache.org/jira/browse/KAFKA-1813
> Project: Kafka
>  Issue Type: Bug
>  Components: build
>Reporter: Anatoly Fayngelerin
>Priority: Minor
> Attachments: fix_2_9_2_build.patch
>
>
> Currently, in trunk, the 2.9.2 build fails with the following error:
> MirrorMaker.scala:507 overloaded method value commitOffsets with alternatives:
>   (isAutoCommit: Boolean,topicPartitionOffsets: 
> scala.collection.immutable.Map[kafka.common.TopicAndPartition,kafka.common.OffsetAndMetadata])Unit
>  
>   (isAutoCommit: Boolean)Unit 
>   => Unit
>  cannot be applied to (isAutoCommit: Boolean, 
> scala.collection.immutable.Map[kafka.common.TopicAndPartition,kafka.common.OffsetAndMetadata])
> connector.commitOffsets(isAutoCommit = false, offsetsToCommit)
> It looks like the 2.9.2 compiler cannot resolve an overloaded method when 
> mixing named and ordered parameters.
> I ran into this when I cloned the repo and ran ./gradlew test.
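
A self-contained reproduction of the problem using stand-in methods rather than the real connector API: on Scala 2.9.x, mixing a named argument with a positional one fails to resolve between the overloads, while all-positional arguments compile on every supported version.

object OverloadRepro {
  def commitOffsets(isAutoCommit: Boolean): Unit =
    println("auto=" + isAutoCommit)
  def commitOffsets(isAutoCommit: Boolean, offsets: Map[String, Long]): Unit =
    println("auto=" + isAutoCommit + " offsets=" + offsets)

  def main(args: Array[String]): Unit = {
    val toCommit = Map("topic1-0" -> 42L)
    // The 2.9.x compiler cannot resolve this mixed named/positional call:
    //   commitOffsets(isAutoCommit = false, toCommit)
    commitOffsets(false, toCommit) // portable fix: all-positional arguments
  }
}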



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1173) Using Vagrant to get up and running with Apache Kafka

2014-12-05 Thread Joe Stein (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Stein updated KAFKA-1173:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

committed to trunk

> Using Vagrant to get up and running with Apache Kafka
> -
>
> Key: KAFKA-1173
> URL: https://issues.apache.org/jira/browse/KAFKA-1173
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Joe Stein
>Assignee: Ewen Cheslack-Postava
> Fix For: 0.8.3
>
> Attachments: KAFKA-1173.patch, KAFKA-1173_2013-12-07_12:07:55.patch, 
> KAFKA-1173_2014-11-11_13:50:55.patch, KAFKA-1173_2014-11-12_11:32:09.patch, 
> KAFKA-1173_2014-11-18_16:01:33.patch
>
>
> Vagrant has been getting a lot of pickup in the tech communities.  I have 
> found it very useful for development and testing and working with a few 
> clients now using it to help virtualize their environments in repeatable ways.
> Using Vagrant to get up and running.
> For 0.8.0 I have a patch on github https://github.com/stealthly/kafka
> 1) Install Vagrant [http://www.vagrantup.com/](http://www.vagrantup.com/)
> 2) Install Virtual Box 
> [https://www.virtualbox.org/](https://www.virtualbox.org/)
> In the main kafka folder
> 1) ./sbt update
> 2) ./sbt package
> 3) ./sbt assembly-package-dependency
> 4) vagrant up
> once this is done 
> * Zookeeper will be running 192.168.50.5
> * Broker 1 on 192.168.50.10
> * Broker 2 on 192.168.50.20
> * Broker 3 on 192.168.50.30
> When you are all up and running you will be back at a command prompt.  
> If you want you can log in to the machines using vagrant ssh  but 
> you don't need to.
> You can access the brokers and zookeeper by their IP
> e.g.
> bin/kafka-console-producer.sh --broker-list 
> 192.168.50.10:9092,192.168.50.20:9092,192.168.50.30:9092 --topic sandbox
> bin/kafka-console-consumer.sh --zookeeper 192.168.50.5:2181 --topic sandbox 
> --from-beginning



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1173) Using Vagrant to get up and running with Apache Kafka

2014-12-04 Thread Joe Stein (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14234816#comment-14234816
 ] 

Joe Stein commented on KAFKA-1173:
--

[~ewencp] fell off my radar some will circle back on this next day or so and 
catch back up

> Using Vagrant to get up and running with Apache Kafka
> -
>
> Key: KAFKA-1173
> URL: https://issues.apache.org/jira/browse/KAFKA-1173
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Joe Stein
>Assignee: Ewen Cheslack-Postava
> Fix For: 0.8.3
>
> Attachments: KAFKA-1173.patch, KAFKA-1173_2013-12-07_12:07:55.patch, 
> KAFKA-1173_2014-11-11_13:50:55.patch, KAFKA-1173_2014-11-12_11:32:09.patch, 
> KAFKA-1173_2014-11-18_16:01:33.patch
>
>
> Vagrant has been getting a lot of pickup in the tech communities.  I have 
> found it very useful for development and testing and working with a few 
> clients now using it to help virtualize their environments in repeatable ways.
> Using Vagrant to get up and running.
> For 0.8.0 I have a patch on github https://github.com/stealthly/kafka
> 1) Install Vagrant [http://www.vagrantup.com/](http://www.vagrantup.com/)
> 2) Install Virtual Box 
> [https://www.virtualbox.org/](https://www.virtualbox.org/)
> In the main kafka folder
> 1) ./sbt update
> 2) ./sbt package
> 3) ./sbt assembly-package-dependency
> 4) vagrant up
> once this is done 
> * Zookeeper will be running 192.168.50.5
> * Broker 1 on 192.168.50.10
> * Broker 2 on 192.168.50.20
> * Broker 3 on 192.168.50.30
> When you are all up and running you will be back at a command prompt.  
> If you want you can log in to the machines using vagrant ssh  but 
> you don't need to.
> You can access the brokers and zookeeper by their IP
> e.g.
> bin/kafka-console-producer.sh --broker-list 
> 192.168.50.10:9092,192.168.50.20:9092,192.168.50.30:9092 --topic sandbox
> bin/kafka-console-consumer.sh --zookeeper 192.168.50.5:2181 --topic sandbox 
> --from-beginning



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1753) add --decommission-broker option

2014-12-01 Thread Joe Stein (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14230239#comment-14230239
 ] 

Joe Stein commented on KAFKA-1753:
--

So, I think with the changes in KAFKA-1792 this now becomes the same type of 
algorithm, except that the partitions on the broker being decommissioned are 
essentially what we are evenly spreading across the rest of the cluster nodes, 
as sketched below.
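
A sketch of that idea with made-up data structures (this is not the actual reassignment tool): every replica hosted on the decommissioned broker is moved, round-robin, to a surviving broker that does not already hold a replica of the partition.

object Decommission {
  // assignment: partition name -> replica broker ids
  def spread(assignment: Map[String, Seq[Int]],
             dead: Int, brokers: Seq[Int]): Map[String, Seq[Int]] = {
    val survivors = brokers.filterNot(_ == dead)
    var next = 0
    assignment.map { case (tp, replicas) =>
      if (!replicas.contains(dead)) tp -> replicas
      else {
        // Candidates: survivors that do not already hold a replica of tp.
        val candidates = survivors.filterNot(b => replicas.contains(b))
        require(candidates.nonEmpty, "no broker available for " + tp)
        val target = candidates(next % candidates.size) // round-robin target
        next += 1
        tp -> replicas.map(r => if (r == dead) target else r)
      }
    }
  }

  def main(args: Array[String]): Unit =
    println(spread(Map("t-0" -> Seq(1, 2), "t-1" -> Seq(1, 3)),
                   dead = 1, brokers = Seq(1, 2, 3, 4)))
}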

> add --decommission-broker option
> 
>
> Key: KAFKA-1753
> URL: https://issues.apache.org/jira/browse/KAFKA-1753
> Project: Kafka
>  Issue Type: Sub-task
>  Components: tools
>Reporter: Dmitry Pekar
>Assignee: Dmitry Pekar
> Fix For: 0.8.3
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1802) Add a new type of request for the discovery of the controller

2014-12-01 Thread Joe Stein (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14230037#comment-14230037
 ] 

Joe Stein commented on KAFKA-1802:
--

Let's not update "A guide to the kafka protocol" until everything is baked in 
and committed (that is its final resting place), but please do update 
https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Command+Line+and+Related+Improvements
 so discussion of all message changes/additions for this can happen there.

> Add a new type of request for the discovery of the controller
> -
>
> Key: KAFKA-1802
> URL: https://issues.apache.org/jira/browse/KAFKA-1802
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Joe Stein
> Fix For: 0.8.3
>
>
> The goal here is analogous to metadata discovery for the producer: it lets the 
> CLI find which broker it should send the rest of its admin requests to.  Any 
> broker can respond to this specific AdminMeta RQ/RP, but only the controller 
> broker should respond to Admin messages; any other broker should answer an 
> admin message with a response indicating which broker the controller is.
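
A sketch of the client-side flow this implies; every type and call below is hypothetical, since the actual AdminMeta RQ/RP format was still being designed: ask any bootstrap broker for the controller, then route admin requests to it.

// All types here are hypothetical stand-ins, not a committed protocol.
case class Broker(host: String, port: Int)
case class AdminMetaResponse(controller: Broker)

object ControllerDiscovery {
  // Stand-in for sending an AdminMeta request to one bootstrap broker.
  def fetchAdminMeta(b: Broker): AdminMetaResponse =
    AdminMetaResponse(Broker("broker2", 9092)) // canned answer for the sketch

  def findController(bootstrap: Seq[Broker]): Option[Broker] =
    // Any broker may answer AdminMeta; try each until one responds.
    bootstrap.iterator
      .map(b => scala.util.Try(fetchAdminMeta(b)).toOption)
      .collectFirst { case Some(r) => r.controller }

  def main(args: Array[String]): Unit = {
    val controller = findController(Seq(Broker("broker1", 9092)))
    println("send admin requests to: " + controller)
  }
}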



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1772) Add an Admin message type for request response

2014-12-01 Thread Joe Stein (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14229799#comment-14229799
 ] 

Joe Stein commented on KAFKA-1772:
--

1) agreed
2) I think we can do away with the "format" field. Whether we use JSON or byte 
structures, it will be one or the other and not both, so there is no need for a 
field saying which.
3) Yes, let's document the message format(s) in the main section 
https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Command+Line+and+Related+Improvements
4) We definitely need to support an immediate response plus polling for status. 
It might be best to just do this for all of the commands, never block at the 
broker, and keep a single pattern on the wire. The layer above the calls will 
definitely want/need a --sync type option, which should live completely in the 
CLI (IMHO): loop checking the status (every few ms, configurable/overridable), 
and in some cases (like topic-related commands) make that the default, because 
that is the experience now. We could also have some commands be blocking and 
some not; a sketch of the client-side polling loop follows.
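
A sketch of the CLI-side --sync loop described in point 4 above, with an entirely hypothetical checkStatus probe (this is not the actual admin protocol): the broker never blocks, and the client polls at a configurable interval until the command finishes.

object SyncPoll {
  sealed trait Status
  case object InProgress extends Status
  case object Done extends Status

  // Hypothetical status probe; a real client would send a status request
  // to the broker for the given command id.
  def checkStatus(commandId: Long, attempt: Int): Status =
    if (attempt >= 3) Done else InProgress

  // CLI-side --sync: the broker never blocks; the client polls every intervalMs.
  def awaitCompletion(commandId: Long, intervalMs: Long = 100L): Unit = {
    var attempt = 0
    while (checkStatus(commandId, attempt) != Done) {
      Thread.sleep(intervalMs)
      attempt += 1
    }
    println("command " + commandId + " finished after " + attempt + " polls")
  }

  def main(args: Array[String]): Unit = awaitCompletion(1L)
}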

> Add an Admin message type for request response
> --
>
> Key: KAFKA-1772
> URL: https://issues.apache.org/jira/browse/KAFKA-1772
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Joe Stein
>Assignee: Andrii Biletskyi
> Fix For: 0.8.3
>
> Attachments: KAFKA-1772.patch
>
>
> - utility int8
> - command int8
> - format int8
> - args variable length bytes
> utility 
> 0 - Broker
> 1 - Topic
> 2 - Replication
> 3 - Controller
> 4 - Consumer
> 5 - Producer
> Command
> 0 - Create
> 1 - Alter
> 3 - Delete
> 4 - List
> 5 - Audit
> format
> 0 - JSON
> args e.g. (which would equate to the data structure values == 2,1,0)
> "meta-store": {
> {"zookeeper":"localhost:12913/kafka"}
> }"args": {
>  "partitions":
>   [
> {"topic": "topic1", "partition": "0"},
> {"topic": "topic1", "partition": "1"},
> {"topic": "topic1", "partition": "2"},
>  
> {"topic": "topic2", "partition": "0"},
> {"topic": "topic2", "partition": "1"},
>   ]
> }
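
A sketch of how the fixed header above could be laid out, following the int8 field order in the description; this is an illustration only, not a committed wire format, and the args payload is an abbreviated version of the JSON above.

import java.nio.ByteBuffer
import java.nio.charset.StandardCharsets

object AdminMessage {
  // Field order follows the description: utility int8, command int8,
  // format int8, then variable-length args bytes.
  def encode(utility: Byte, command: Byte, format: Byte, args: String): ByteBuffer = {
    val argBytes = args.getBytes(StandardCharsets.UTF_8)
    val buf = ByteBuffer.allocate(3 + argBytes.length)
    buf.put(utility).put(command).put(format).put(argBytes)
    buf.flip()
    buf
  }

  def main(args: Array[String]): Unit = {
    // utility=2 (Replication), command=1 (Alter), format=0 (JSON), as listed above
    val buf = encode(2, 1, 0, """{"partitions":[{"topic":"topic1","partition":"0"}]}""")
    println("encoded " + buf.remaining() + " bytes")
  }
}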



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-1802) Add a new type of request for the discovery of the controller

2014-11-28 Thread Joe Stein (JIRA)
Joe Stein created KAFKA-1802:


 Summary: Add a new type of request for the discovery of the 
controller
 Key: KAFKA-1802
 URL: https://issues.apache.org/jira/browse/KAFKA-1802
 Project: Kafka
  Issue Type: Sub-task
Reporter: Joe Stein
 Fix For: 0.8.3


The goal here is analogous to metadata discovery for the producer: it lets the 
CLI find which broker it should send the rest of its admin requests to.  Any 
broker can respond to this specific AdminMeta RQ/RP, but only the controller 
broker should respond to Admin messages; any other broker should answer an 
admin message with a response indicating which broker the controller is.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1781) Readme should specify that Gradle 2.0 is required for initial bootstrap

2014-11-28 Thread Joe Stein (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14228633#comment-14228633
 ] 

Joe Stein commented on KAFKA-1781:
--

+1 to double commit KAFKA-1624 and add the gradle version requirement for JDK 8 
in README

> Readme should specify that Gradle 2.0 is required for initial bootstrap
> ---
>
> Key: KAFKA-1781
> URL: https://issues.apache.org/jira/browse/KAFKA-1781
> Project: Kafka
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.8.2
>Reporter: Jean-Francois Im
>Priority: Blocker
> Fix For: 0.8.2
>
> Attachments: gradle-2.0-readme.patch
>
>
> Current README.md says "You need to have gradle installed."
> As the bootstrap procedure doesn't work with gradle 1.12, this needs to say 
> that 2.0 or greater is needed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1461) Replica fetcher thread does not implement any back-off behavior

2014-11-25 Thread Joe Stein (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14225166#comment-14225166
 ] 

Joe Stein commented on KAFKA-1461:
--

[~n...@museglobal.ro] are you working on this patch? If not, can we set it back to 
unassigned so someone who wants to can jump in and fix it? It sure is annoying when it 
happens (like while waiting on "Recovering unflushed segment"); during that time every 
replica fetching from the affected broker spews ERROR kafka.server.ReplicaFetcherThread.

> Replica fetcher thread does not implement any back-off behavior
> ---
>
> Key: KAFKA-1461
> URL: https://issues.apache.org/jira/browse/KAFKA-1461
> Project: Kafka
>  Issue Type: Improvement
>  Components: replication
>Affects Versions: 0.8.1.1
>Reporter: Sam Meder
>Assignee: nicu marasoiu
>  Labels: newbie++
> Fix For: 0.8.3
>
>
> The current replica fetcher thread will retry in a tight loop if any error 
> occurs during the fetch call. For example, we've seen cases where the fetch 
> continuously throws a connection refused exception leading to several replica 
> fetcher threads that spin in a pretty tight loop.
> To a much lesser degree this is also an issue in the consumer fetcher thread, 
> although the fact that erroring partitions are removed so a leader can be 
> re-discovered helps some.
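
A minimal sketch of the back-off behavior being requested, with a stand-in doFetch rather than the actual ReplicaFetcherThread code: on error, sleep with capped exponential back-off instead of retrying in a tight loop.

object FetcherBackoff {
  // Stand-in fetch that fails a few times, e.g. with connection refused.
  def doFetch(attempt: Int): Unit =
    if (attempt < 3) throw new java.io.IOException("connection refused")

  // Back off exponentially between failed fetches instead of spinning.
  def fetchWithBackoff(initialMs: Long = 50L, maxMs: Long = 1000L): Unit = {
    var backoff = initialMs
    var attempt = 0
    var done = false
    while (!done) {
      try { doFetch(attempt); done = true }
      catch {
        case e: java.io.IOException =>
          println("fetch failed (" + e.getMessage + "), backing off " + backoff + "ms")
          Thread.sleep(backoff)
          backoff = math.min(backoff * 2, maxMs) // capped exponential back-off
          attempt += 1
      }
    }
  }

  def main(args: Array[String]): Unit = fetchWithBackoff()
}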



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1461) Replica fetcher thread does not implement any back-off behavior

2014-11-25 Thread Joe Stein (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Stein updated KAFKA-1461:
-
Fix Version/s: 0.8.3

> Replica fetcher thread does not implement any back-off behavior
> ---
>
> Key: KAFKA-1461
> URL: https://issues.apache.org/jira/browse/KAFKA-1461
> Project: Kafka
>  Issue Type: Improvement
>  Components: replication
>Affects Versions: 0.8.1.1
>Reporter: Sam Meder
>Assignee: nicu marasoiu
>  Labels: newbie++
> Fix For: 0.8.3
>
>
> The current replica fetcher thread will retry in a tight loop if any error 
> occurs during the fetch call. For example, we've seen cases where the fetch 
> continuously throws a connection refused exception leading to several replica 
> fetcher threads that spin in a pretty tight loop.
> To a much lesser degree this is also an issue in the consumer fetcher thread, 
> although the fact that erroring partitions are removed so a leader can be 
> re-discovered helps some.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1173) Using Vagrant to get up and running with Apache Kafka

2014-11-21 Thread Joe Stein (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14221118#comment-14221118
 ] 

Joe Stein commented on KAFKA-1173:
--

[~ewencp] I think we both agree we are not trying to deal with every use case, 
just making sure folks don't have a bad experience.  Let me go through your 
latest patch; I should be in a place where I can commit it when I get some time 
for spin ups/downs and such again.  Thanks!

> Using Vagrant to get up and running with Apache Kafka
> -
>
> Key: KAFKA-1173
> URL: https://issues.apache.org/jira/browse/KAFKA-1173
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Joe Stein
>Assignee: Ewen Cheslack-Postava
> Fix For: 0.8.3
>
> Attachments: KAFKA-1173.patch, KAFKA-1173_2013-12-07_12:07:55.patch, 
> KAFKA-1173_2014-11-11_13:50:55.patch, KAFKA-1173_2014-11-12_11:32:09.patch, 
> KAFKA-1173_2014-11-18_16:01:33.patch
>
>
> Vagrant has been getting a lot of pickup in the tech communities.  I have 
> found it very useful for development and testing and working with a few 
> clients now using it to help virtualize their environments in repeatable ways.
> Using Vagrant to get up and running.
> For 0.8.0 I have a patch on github https://github.com/stealthly/kafka
> 1) Install Vagrant [http://www.vagrantup.com/](http://www.vagrantup.com/)
> 2) Install Virtual Box 
> [https://www.virtualbox.org/](https://www.virtualbox.org/)
> In the main kafka folder
> 1) ./sbt update
> 2) ./sbt package
> 3) ./sbt assembly-package-dependency
> 4) vagrant up
> once this is done 
> * Zookeeper will be running 192.168.50.5
> * Broker 1 on 192.168.50.10
> * Broker 2 on 192.168.50.20
> * Broker 3 on 192.168.50.30
> When you are all up and running you will be back at a command prompt.  
> If you want you can log in to the machines using vagrant ssh  but 
> you don't need to.
> You can access the brokers and zookeeper by their IP
> e.g.
> bin/kafka-console-producer.sh --broker-list 
> 192.168.50.10:9092,192.168.50.20:9092,192.168.50.30:9092 --topic sandbox
> bin/kafka-console-consumer.sh --zookeeper 192.168.50.5:2181 --topic sandbox 
> --from-beginning



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1790) Remote controlled shutdown was removed

2014-11-20 Thread Joe Stein (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Stein updated KAFKA-1790:
-
Fix Version/s: 0.8.2

> Remote controlled shutdown was removed
> --
>
> Key: KAFKA-1790
> URL: https://issues.apache.org/jira/browse/KAFKA-1790
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.2
>Reporter: James Oliver
>Assignee: James Oliver
>Priority: Blocker
> Fix For: 0.8.2
>
>
> In core:
> kafka.admin.ShutdownBroker was removed, rendering remote controlled shutdowns 
> impossible. 
> A Kafka administrator needs to be able to perform a controlled shutdown 
> without issuing a SIGTERM/SIGKILL.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1786) implement a global configuration feature for brokers

2014-11-19 Thread Joe Stein (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Stein updated KAFKA-1786:
-
Description: 
Global level configurations (much like topic level) for brokers are managed by 
humans and automation systems through server.properties.  

Some configurations make sense to use the default for (like it is now) or to 
override from a central location (ZooKeeper for now). We can modify these through 
the new CLI tool so that every broker can have the exact same setting.  Some 
configurations we should allow to be overridden from server.properties (like 
port), but for others we should use the global store as the source of truth (e.g. 
auto topic enable, fetch replica message size, etc). Since I believe most 
configurations are going to fall into this category, we should keep the 
server.properties keys that may override the global config in a list in the code 
which we can manage... everything else, the global takes precedence. 

  was:
Global level configurations (much like topic level) for brokers are managed by 
humans and automation systems through server.properties.  

Some configurations make sense to use the default for (like it is now) or to 
override from a central location (ZooKeeper for now). We can modify these through 
the new CLI tool so that every broker can have the exact same setting.  Some 
configurations we should allow to be overridden from server.properties (like 
port), but for others we should use the global store as the source of truth (e.g. 
auto topic enable, fetch replica message size, etc).


> implement a global configuration feature for brokers
> 
>
> Key: KAFKA-1786
> URL: https://issues.apache.org/jira/browse/KAFKA-1786
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Joe Stein
> Fix For: 0.8.3
>
>
> Global level configurations (much like topic level) for brokers are managed 
> by humans and automation systems through server.properties.  
> Some configurations make sense to use the default for (like it is now) or to 
> override from a central location (ZooKeeper for now). We can modify these 
> through the new CLI tool so that every broker can have the exact same setting.  
> Some configurations we should allow to be overridden from server.properties 
> (like port), but for others we should use the global store as the source of 
> truth (e.g. auto topic enable, fetch replica message size, etc). Since I 
> believe most configurations are going to fall into this category, we should 
> keep the server.properties keys that may override the global config in a list 
> in the code which we can manage... everything else, the global takes 
> precedence. 
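
A sketch of the precedence rule described above, with hypothetical key names and a made-up allow-list (nothing here is actual Kafka configuration code): only keys on a managed list may be overridden from server.properties; every other key takes its value from the global (ZooKeeper-backed) store.

object GlobalConfig {
  // Keys that server.properties may override locally (the managed list).
  val localOverridable: Set[String] = Set("port", "log.dirs")

  // Global (ZooKeeper-backed) values win for every key not on the list.
  def effectiveConfig(global: Map[String, String],
                      local: Map[String, String]): Map[String, String] =
    global.map { case (k, v) =>
      if (localOverridable.contains(k) && local.contains(k)) k -> local(k)
      else k -> v
    }

  def main(args: Array[String]): Unit = {
    val global = Map("port" -> "9092", "auto.create.topics.enable" -> "true")
    val local  = Map("port" -> "9192", "auto.create.topics.enable" -> "false")
    // port is taken locally; auto.create.topics.enable keeps the global value
    println(effectiveConfig(global, local))
  }
}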



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-1786) implement a global configuration feature for brokers

2014-11-19 Thread Joe Stein (JIRA)
Joe Stein created KAFKA-1786:


 Summary: implement a global configuration feature for brokers
 Key: KAFKA-1786
 URL: https://issues.apache.org/jira/browse/KAFKA-1786
 Project: Kafka
  Issue Type: Sub-task
Reporter: Joe Stein
 Fix For: 0.8.3


Global level configurations (much like topic level) for brokers are managed by 
humans and automation systems through server.properties.  

Some configurations make sense to use the default for (like it is now) or to 
override from a central location (ZooKeeper for now). We can modify these through 
the new CLI tool so that every broker can have the exact same setting.  Some 
configurations we should allow to be overridden from server.properties (like 
port), but for others we should use the global store as the source of truth (e.g. 
auto topic enable, fetch replica message size, etc).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1125) Add options to let admin tools block until finish

2014-11-19 Thread Joe Stein (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1125?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Stein updated KAFKA-1125:
-
Fix Version/s: 0.8.3

> Add options to let admin tools block until finish
> -
>
> Key: KAFKA-1125
> URL: https://issues.apache.org/jira/browse/KAFKA-1125
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.1
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
> Fix For: 0.8.3
>
>
> Topic config change as well as create-topic, add-partition, 
> partition-reassignment and preferred leader election are all asynchronous in 
> the sense that the admin command returns immediately and one has to 
> check for oneself whether the process has finished. It would be better to add 
> an option to make these commands block until the process is done.
> Also, it would be good to order admin tasks so that they can be 
> executed sequentially.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1774) REPL and Shell Client for Admin Message RQ/RP

2014-11-19 Thread Joe Stein (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14218157#comment-14218157
 ] 

Joe Stein commented on KAFKA-1774:
--

some thoughts

1) I think that /clients makes sense since this is an admin client.
2) JLine I have heard good things about, but I have no personal experience with 
issues on it
3) If we do this in Java then we won't have the issue with Scala binaries for 
client use, and Java interactions are easier; not sure how much that will matter 
in this case though, since scripts will just use the CLI and not even care what 
it is written in. If we do it in Scala then my answer for #1 might change, or we 
have to just put Scala code in that directory, which may not make sense. My 
preference is Scala over Java.
4) We have talked about that in 
https://issues.apache.org/jira/browse/KAFKA-1595 some; we should drive 
consistency if we can. I have had really good experience with 
http://wiki.fasterxml.com/JacksonHome in both Java and Scala; a small example 
follows.
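
For what it's worth, a minimal sketch of Jackson usage from Scala (assumes the jackson-databind artifact on the classpath; the JSON is just an illustrative topic-count map, not the actual TopicCount format):

import com.fasterxml.jackson.databind.ObjectMapper

object JacksonExample {
  def main(args: Array[String]): Unit = {
    val mapper = new ObjectMapper()
    // Tree model: a single pass over the input, no combinator recursion.
    val node = mapper.readTree("""{"topic1": 2, "topic2": 4}""")
    val fields = node.fields()
    while (fields.hasNext) {
      val e = fields.next()
      println(e.getKey + " -> " + e.getValue.asInt())
    }
  }
}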



> REPL and Shell Client for Admin Message RQ/RP
> -
>
> Key: KAFKA-1774
> URL: https://issues.apache.org/jira/browse/KAFKA-1774
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Joe Stein
>Assignee: Andrii Biletskyi
> Fix For: 0.8.3
>
>
> We should have a REPL we can work in and execute the commands with the 
> arguments. With this we can do:
> ./kafka.sh --shell 
> kafka>attach cluster -b localhost:9092;
> kafka>describe topic sampleTopicNameForExample;
> the command line version can work like it does now so folks don't have to 
> re-write all of their tooling.
> kafka.sh --topics --everything the same like kafka-topics.sh is 
> kafka.sh --reassign --everything the same like kafka-reassign-partitions.sh 
> is 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1595) Remove deprecated and slower scala JSON parser from kafka.consumer.TopicCount

2014-11-19 Thread Joe Stein (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Stein updated KAFKA-1595:
-
Fix Version/s: 0.8.3

> Remove deprecated and slower scala JSON parser from kafka.consumer.TopicCount
> -
>
> Key: KAFKA-1595
> URL: https://issues.apache.org/jira/browse/KAFKA-1595
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.8.1.1
>Reporter: Jagbir
>  Labels: newbie
> Fix For: 0.8.3
>
>
> The following issue is created as a follow up suggested by Jun Rao
> in a kafka news group message with the Subject
> "Blocking Recursive parsing from 
> kafka.consumer.TopicCount$.constructTopicCount"
> SUMMARY:
> An issue was detected in a typical cluster of 3 kafka instances backed
> by 3 zookeeper instances (kafka version 0.8.1.1, scala version 2.10.3,
> java version 1.7.0_65). On consumer end, when consumers get recycled,
> there is a troubling JSON parsing recursion which takes a busy lock and
> blocks consumers thread pool.
> In 0.8.1.1 scala client library ZookeeperConsumerConnector.scala:355 takes
> a global lock (0xd3a7e1d0) during the rebalance, and fires an
> expensive JSON parsing, while keeping the other consumers from shutting
> down, see, e.g,
> at 
> kafka.consumer.ZookeeperConsumerConnector.shutdown(ZookeeperConsumerConnector.scala:161)
> The deep recursive JSON parsing should be deprecated in favor
> of a better JSON parser, see, e.g,
> http://engineering.ooyala.com/blog/comparing-scala-json-libraries?
> DETAILS:
> The first dump is for a recursive blocking thread holding the lock for 
> 0xd3a7e1d0
> and the subsequent dump is for a waiting thread.
> (Please grep for 0xd3a7e1d0 to see the locked object.)
>
> -8<-
> "Sa863f22b1e5hjh6788991800900b34545c_profile-a-prod1-s-140789080845312-c397945e8_watcher_executor"
> prio=10 tid=0x7f24dc285800 nid=0xda9 runnable [0x7f249e40b000]
> java.lang.Thread.State: RUNNABLE
> at 
> scala.util.parsing.combinator.Parsers$$anonfun$rep1$1.p$7(Parsers.scala:722)
> at 
> scala.util.parsing.combinator.Parsers$$anonfun$rep1$1.continue$1(Parsers.scala:726)
> at 
> scala.util.parsing.combinator.Parsers$$anonfun$rep1$1.apply(Parsers.scala:737)
> at 
> scala.util.parsing.combinator.Parsers$$anonfun$rep1$1.apply(Parsers.scala:721)
> at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
> at 
> scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
> at 
> scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
> at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
> at 
> scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
> at 
> scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
> at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
> at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
> at 
> scala.util.parsing.combinator.Parsers$Success.flatMapWithNext(Parsers.scala:142)
> at 
> scala.util.parsing.combinator.Parsers$Parser$$anonfun$flatMap$1.apply(Parsers.scala:239)
> at 
> scala.util.parsing.combinator.Parsers$Parser$$anonfun$flatMap$1.apply(Parsers.scala:239)
> at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
> at 
> scala.util.parsing.combinator.Parsers$Parser$$anonfun$flatMap$1.apply(Parsers.scala:239)
> at 
> scala.util.parsing.combinator.Parsers$Parser$$anonfun$flatMap$1.apply(Parsers.scala:239)
> at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
> at 
> scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
> at 
> scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
> at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
> at 
> scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
> at 
> scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
> at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
> at 
> scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
> at 
> scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
> at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
> at 
> scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
> at 
> scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
> at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
> at 
> scala.util.parsing.combinator.Parsers$Parser$

[jira] [Commented] (KAFKA-1173) Using Vagrant to get up and running with Apache Kafka

2014-11-17 Thread Joe Stein (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14215675#comment-14215675
 ] 

Joe Stein commented on KAFKA-1173:
--

[~ewencp] changing override.hostmanager.ignore_private_ip = false in the AWS 
section didn't work :( The host manager is still setting the hosts to the 192 
addresses:

cat /etc/hosts
## vagrant-hostmanager-start
192.168.50.51   broker1 
192.168.50.52   broker2 
192.168.50.53   broker3 
192.168.50.11   zk1 

The VirtualBox parts are great, I think, for folks to jump in and get up and 
running quickly using Vagrant; it is helpful for development and works 
without futzing with it, yup. One option is we could commit that part and move 
the AWS pieces to another ticket. I don't mind that, and I am OK with helping to 
keep testing the EC2 parts, as long as it working for folks out of the box with 
few issues/steps is our end game. I should have a chance to try this all 
again and/or review whatever changes on Wednesday & Thursday (FYI gotta knock 
off for the evening and tomorrow is packed). Many folks have a VPC; we should try 
to accommodate them, otherwise it just looks like Kafka isn't working (or is 
harder to set up than it really is). We already get a lot of emails about EC2 
and advertising hosts and everything else, so this could be really helpful for 
folks once it is working more. 

<< The Spark EC2 scripts do a nice job of just setting up a usable default so 
it's really easy to get up and running, but I'm also hesitant to have the 
script automatically muck with users' security settings.

You can make it a flag, with some detail about changing the flag and what is 
going to happen if you do (so it is not automatic).  I think if things are 
going to work then folks can decide for themselves whether the impact of that is 
worth it for them. I think having it just not work, though, without having to do 
a lot, kind of takes away from the "spin up something working" aspect of the 
change.

We will also find out a lot more and learn what the community wants more of 
and/or done differently as this gets out into the world.


> Using Vagrant to get up and running with Apache Kafka
> -
>
> Key: KAFKA-1173
> URL: https://issues.apache.org/jira/browse/KAFKA-1173
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Joe Stein
>Assignee: Ewen Cheslack-Postava
> Fix For: 0.8.3
>
> Attachments: KAFKA-1173.patch, KAFKA-1173_2013-12-07_12:07:55.patch, 
> KAFKA-1173_2014-11-11_13:50:55.patch, KAFKA-1173_2014-11-12_11:32:09.patch
>
>
> Vagrant has been getting a lot of pickup in the tech communities.  I have 
> found it very useful for development and testing and working with a few 
> clients now using it to help virtualize their environments in repeatable ways.
> Using Vagrant to get up and running.
> For 0.8.0 I have a patch on github https://github.com/stealthly/kafka
> 1) Install Vagrant [http://www.vagrantup.com/](http://www.vagrantup.com/)
> 2) Install Virtual Box 
> [https://www.virtualbox.org/](https://www.virtualbox.org/)
> In the main kafka folder
> 1) ./sbt update
> 2) ./sbt package
> 3) ./sbt assembly-package-dependency
> 4) vagrant up
> once this is done 
> * Zookeeper will be running 192.168.50.5
> * Broker 1 on 192.168.50.10
> * Broker 2 on 192.168.50.20
> * Broker 3 on 192.168.50.30
> When you are all up and running you will be back at a command prompt.  
> If you want you can log in to the machines using vagrant ssh  but 
> you don't need to.
> You can access the brokers and zookeeper by their IP
> e.g.
> bin/kafka-console-producer.sh --broker-list 
> 192.168.50.10:9092,192.168.50.20:9092,192.168.50.30:9092 --topic sandbox
> bin/kafka-console-consumer.sh --zookeeper 192.168.50.5:2181 --topic sandbox 
> --from-beginning



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1173) Using Vagrant to get up and running with Apache Kafka

2014-11-17 Thread Joe Stein (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14215653#comment-14215653
 ] 

Joe Stein commented on KAFKA-1173:
--

[~ewencp] I agree, no more toggles unless required. I am looking at this from 
the community perspective and thinking that once we commit this we have to 
support it, and all the questions/issues people are going to have will come 
over the mailing list (etc). 

I flipped ignore_private_ip to false in the AWS section of the Vagrantfile and 
am giving it another try.

> Using Vagrant to get up and running with Apache Kafka
> -
>
> Key: KAFKA-1173
> URL: https://issues.apache.org/jira/browse/KAFKA-1173
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Joe Stein
>Assignee: Ewen Cheslack-Postava
> Fix For: 0.8.3
>
> Attachments: KAFKA-1173.patch, KAFKA-1173_2013-12-07_12:07:55.patch, 
> KAFKA-1173_2014-11-11_13:50:55.patch, KAFKA-1173_2014-11-12_11:32:09.patch
>
>
> Vagrant has been getting a lot of pickup in the tech communities.  I have 
> found it very useful for development and testing and working with a few 
> clients now using it to help virtualize their environments in repeatable ways.
> Using Vagrant to get up and running.
> For 0.8.0 I have a patch on github https://github.com/stealthly/kafka
> 1) Install Vagrant [http://www.vagrantup.com/](http://www.vagrantup.com/)
> 2) Install Virtual Box 
> [https://www.virtualbox.org/](https://www.virtualbox.org/)
> In the main kafka folder
> 1) ./sbt update
> 2) ./sbt package
> 3) ./sbt assembly-package-dependency
> 4) vagrant up
> once this is done 
> * Zookeeper will be running 192.168.50.5
> * Broker 1 on 192.168.50.10
> * Broker 2 on 192.168.50.20
> * Broker 3 on 192.168.50.30
> When you are all up and running you will be back at a command prompt.  
> If you want you can log in to the machines using vagrant ssh  but 
> you don't need to.
> You can access the brokers and zookeeper by their IP
> e.g.
> bin/kafka-console-producer.sh --broker-list 
> 192.168.50.10:9092,192.168.50.20:9092,192.168.50.30:9092 --topic sandbox
> bin/kafka-console-consumer.sh --zookeeper 192.168.50.5:2181 --topic sandbox 
> --from-beginning



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (KAFKA-1173) Using Vagrant to get up and running with Apache Kafka

2014-11-17 Thread Joe Stein (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14215589#comment-14215589
 ] 

Joe Stein edited comment on KAFKA-1173 at 11/18/14 1:47 AM:


[~ewencp] The AWS parts are further along and servers are spinning up with the 
code and configs installed, but I am still hitting an issue.

All of the DNS hosts are on the public IP, but my default security group is only 
set up for port 22 on the outside, with only the internal security group for 
inside.  Can this be changed to use the private IP instead of the public address?

Should I have set
ec2_associate_public_ip = false

I ran Vagrant outside the VPC, but maybe I should change it to false, though the 
comment above that line is the reason I didn't.

Maybe it is something else, but right now the servers can only communicate with 
each other on the internal private IPs while hosts spin up on the public IPs 
(however we fix that). Thanks!



was (Author: joestein):
[~ewencp] The AWS parts are further along and servers are spinning up with the 
code and configs installed, but I am still hitting an issue.

All of the DNS hosts are on the public IP, but my default security group is only 
set up for port 22 on the outside, with only the internal security group for 
inside.  Can this be changed to use the private IP instead of the public address?

Should I have set
ec2_associate_public_ip = false

I ran Vagrant outside the VPC, but maybe I should change it to false, though the 
comment above that line is the reason I didn't.


> Using Vagrant to get up and running with Apache Kafka
> -
>
> Key: KAFKA-1173
> URL: https://issues.apache.org/jira/browse/KAFKA-1173
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Joe Stein
>Assignee: Ewen Cheslack-Postava
> Fix For: 0.8.3
>
> Attachments: KAFKA-1173.patch, KAFKA-1173_2013-12-07_12:07:55.patch, 
> KAFKA-1173_2014-11-11_13:50:55.patch, KAFKA-1173_2014-11-12_11:32:09.patch
>
>
> Vagrant has been getting a lot of pickup in the tech communities.  I have 
> found it very useful for development and testing and working with a few 
> clients now using it to help virtualize their environments in repeatable ways.
> Using Vagrant to get up and running.
> For 0.8.0 I have a patch on github https://github.com/stealthly/kafka
> 1) Install Vagrant [http://www.vagrantup.com/](http://www.vagrantup.com/)
> 2) Install Virtual Box 
> [https://www.virtualbox.org/](https://www.virtualbox.org/)
> In the main kafka folder
> 1) ./sbt update
> 2) ./sbt package
> 3) ./sbt assembly-package-dependency
> 4) vagrant up
> once this is done 
> * Zookeeper will be running 192.168.50.5
> * Broker 1 on 192.168.50.10
> * Broker 2 on 192.168.50.20
> * Broker 3 on 192.168.50.30
> When you are all up and running you will be back at a command prompt.  
> If you want you can log in to the machines using vagrant ssh  but 
> you don't need to.
> You can access the brokers and zookeeper by their IP
> e.g.
> bin/kafka-console-producer.sh --broker-list 
> 192.168.50.10:9092,192.168.50.20:9092,192.168.50.30:9092 --topic sandbox
> bin/kafka-console-consumer.sh --zookeeper 192.168.50.5:2181 --topic sandbox 
> --from-beginning



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1173) Using Vagrant to get up and running with Apache Kafka

2014-11-17 Thread Joe Stein (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14215589#comment-14215589
 ] 

Joe Stein commented on KAFKA-1173:
--

[~ewencp] The AWS parts are further along and servers are spinning up with the 
code and configs installed, but I am still hitting an issue.

All of the DNS hosts are on the public IP, but my default security group is only 
set up for port 22 on the outside, with only the internal security group for 
inside.  Can this be changed to use the private IP instead of the public address?

Should I have set
ec2_associate_public_ip = false

I ran Vagrant outside the VPC, but maybe I should change it to false, though the 
comment above that line is the reason I didn't.


> Using Vagrant to get up and running with Apache Kafka
> -
>
> Key: KAFKA-1173
> URL: https://issues.apache.org/jira/browse/KAFKA-1173
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Joe Stein
>Assignee: Ewen Cheslack-Postava
> Fix For: 0.8.3
>
> Attachments: KAFKA-1173.patch, KAFKA-1173_2013-12-07_12:07:55.patch, 
> KAFKA-1173_2014-11-11_13:50:55.patch, KAFKA-1173_2014-11-12_11:32:09.patch
>
>
> Vagrant has been getting a lot of pickup in the tech communities.  I have 
> found it very useful for development and testing and working with a few 
> clients now using it to help virtualize their environments in repeatable ways.
> Using Vagrant to get up and running.
> For 0.8.0 I have a patch on github https://github.com/stealthly/kafka
> 1) Install Vagrant [http://www.vagrantup.com/](http://www.vagrantup.com/)
> 2) Install Virtual Box 
> [https://www.virtualbox.org/](https://www.virtualbox.org/)
> In the main kafka folder
> 1) ./sbt update
> 2) ./sbt package
> 3) ./sbt assembly-package-dependency
> 4) vagrant up
> once this is done 
> * Zookeeper will be running 192.168.50.5
> * Broker 1 on 192.168.50.10
> * Broker 2 on 192.168.50.20
> * Broker 3 on 192.168.50.30
> When you are all up and running you will be back at a command prompt.  
> If you want you can log in to the machines using vagrant ssh  but 
> you don't need to.
> You can access the brokers and zookeeper by their IP
> e.g.
> bin/kafka-console-producer.sh --broker-list 
> 192.168.50.10:9092,192.168.50.20:9092,192.168.50.30:9092 --topic sandbox
> bin/kafka-console-consumer.sh --zookeeper 192.168.50.5:2181 --topic sandbox 
> --from-beginning



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1173) Using Vagrant to get up and running with Apache Kafka

2014-11-17 Thread Joe Stein (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Stein updated KAFKA-1173:
-
Fix Version/s: 0.8.3

> Using Vagrant to get up and running with Apache Kafka
> -
>
> Key: KAFKA-1173
> URL: https://issues.apache.org/jira/browse/KAFKA-1173
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Joe Stein
>Assignee: Ewen Cheslack-Postava
> Fix For: 0.8.3
>
> Attachments: KAFKA-1173.patch, KAFKA-1173_2013-12-07_12:07:55.patch, 
> KAFKA-1173_2014-11-11_13:50:55.patch, KAFKA-1173_2014-11-12_11:32:09.patch
>
>
> Vagrant has been getting a lot of pickup in the tech communities.  I have 
> found it very useful for development and testing and working with a few 
> clients now using it to help virtualize their environments in repeatable ways.
> Using Vagrant to get up and running.
> For 0.8.0 I have a patch on github https://github.com/stealthly/kafka
> 1) Install Vagrant [http://www.vagrantup.com/](http://www.vagrantup.com/)
> 2) Install Virtual Box 
> [https://www.virtualbox.org/](https://www.virtualbox.org/)
> In the main kafka folder
> 1) ./sbt update
> 2) ./sbt package
> 3) ./sbt assembly-package-dependency
> 4) vagrant up
> once this is done 
> * Zookeeper will be running 192.168.50.5
> * Broker 1 on 192.168.50.10
> * Broker 2 on 192.168.50.20
> * Broker 3 on 192.168.50.30
> When you are all up and running you will be back at a command prompt.  
> If you want you can log in to the machines using vagrant ssh  but 
> you don't need to.
> You can access the brokers and zookeeper by their IP
> e.g.
> bin/kafka-console-producer.sh --broker-list 
> 192.168.50.10:9092,192.168.50.20:9092,192.168.50.30:9092 --topic sandbox
> bin/kafka-console-consumer.sh --zookeeper 192.168.50.5:2181 --topic sandbox 
> --from-beginning



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1173) Using Vagrant to get up and running with Apache Kafka

2014-11-17 Thread Joe Stein (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14215523#comment-14215523
 ] 

Joe Stein commented on KAFKA-1173:
--

Ignore the last issue. I quoted everything and it is spinning up now.

> Using Vagrant to get up and running with Apache Kafka
> -
>
> Key: KAFKA-1173
> URL: https://issues.apache.org/jira/browse/KAFKA-1173
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Joe Stein
>Assignee: Ewen Cheslack-Postava
> Attachments: KAFKA-1173.patch, KAFKA-1173_2013-12-07_12:07:55.patch, 
> KAFKA-1173_2014-11-11_13:50:55.patch, KAFKA-1173_2014-11-12_11:32:09.patch
>
>
> Vagrant has been getting a lot of pickup in the tech communities.  I have 
> found it very useful for development and testing and working with a few 
> clients now using it to help virtualize their environments in repeatable ways.
> Using Vagrant to get up and running.
> For 0.8.0 I have a patch on github https://github.com/stealthly/kafka
> 1) Install Vagrant [http://www.vagrantup.com/](http://www.vagrantup.com/)
> 2) Install Virtual Box 
> [https://www.virtualbox.org/](https://www.virtualbox.org/)
> In the main kafka folder
> 1) ./sbt update
> 2) ./sbt package
> 3) ./sbt assembly-package-dependency
> 4) vagrant up
> once this is done 
> * Zookeeper will be running 192.168.50.5
> * Broker 1 on 192.168.50.10
> * Broker 2 on 192.168.50.20
> * Broker 3 on 192.168.50.30
> When you are all up and running you will be back at a command prompt.  
> If you want you can log in to the machines using vagrant ssh  but 
> you don't need to.
> You can access the brokers and zookeeper by their IP
> e.g.
> bin/kafka-console-producer.sh --broker-list 
> 192.168.50.10:9092,192.168.50.20:9092,192.168.50.30:9092 --topic sandbox
> bin/kafka-console-consumer.sh --zookeeper 192.168.50.5:2181 --topic sandbox 
> --from-beginning



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1173) Using Vagrant to get up and running with Apache Kafka

2014-11-17 Thread Joe Stein (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14215515#comment-14215515
 ] 

Joe Stein commented on KAFKA-1173:
--

FYI: I tried a new AWS secret key but got an error, because the key AWS 
generated had a "+" in it.

There is a syntax error in the following Vagrantfile. The syntax error
message is reproduced below for convenience:

Vagrantfile.local:3: syntax error, unexpected tIDENTIFIER, expecting 
end-of-input

I am going to generate a new key and continue testing.
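
As the follow-up comment in this thread notes, quoting the value resolves 
this: Vagrantfile.local is evaluated as Ruby, so a raw key pasted without 
quotes is not valid syntax. A minimal sketch (the variable name is an 
assumption for illustration, and the key is a made-up example):

{code}
$ cat Vagrantfile.local
# Broken: the raw key pasted without quotes is not valid Ruby
#   ec2_secret_access_key = wJal+rXUtnEXAMPLEKEY
# Fixed: quoting makes it a string literal, so "+" and "/" are fine
ec2_secret_access_key = "wJal+rXUtnEXAMPLEKEY"
{code}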

> Using Vagrant to get up and running with Apache Kafka
> -
>
> Key: KAFKA-1173
> URL: https://issues.apache.org/jira/browse/KAFKA-1173
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Joe Stein
>Assignee: Ewen Cheslack-Postava
> Attachments: KAFKA-1173.patch, KAFKA-1173_2013-12-07_12:07:55.patch, 
> KAFKA-1173_2014-11-11_13:50:55.patch, KAFKA-1173_2014-11-12_11:32:09.patch
>
>
> Vagrant has been getting a lot of pickup in the tech communities.  I have 
> found it very useful for development and testing and working with a few 
> clients now using it to help virtualize their environments in repeatable ways.
> Using Vagrant to get up and running.
> For 0.8.0 I have a patch on github https://github.com/stealthly/kafka
> 1) Install Vagrant [http://www.vagrantup.com/](http://www.vagrantup.com/)
> 2) Install Virtual Box 
> [https://www.virtualbox.org/](https://www.virtualbox.org/)
> In the main kafka folder
> 1) ./sbt update
> 2) ./sbt package
> 3) ./sbt assembly-package-dependency
> 4) vagrant up
> once this is done 
> * Zookeeper will be running 192.168.50.5
> * Broker 1 on 192.168.50.10
> * Broker 2 on 192.168.50.20
> * Broker 3 on 192.168.50.30
> When you are all up and running you will be back at a command prompt.  
> If you want you can log in to the machines using vagrant ssh  but 
> you don't need to.
> You can access the brokers and zookeeper by their IP
> e.g.
> bin/kafka-console-producer.sh --broker-list 
> 192.168.50.10:9092,192.168.50.20:9092,192.168.50.30:9092 --topic sandbox
> bin/kafka-console-consumer.sh --zookeeper 192.168.50.5:2181 --topic sandbox 
> --from-beginning



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1781) Readme should specify that Gradle 2.0 is required for initial bootstrap

2014-11-17 Thread Joe Stein (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14215040#comment-14215040
 ] 

Joe Stein commented on KAFKA-1781:
--

JDK 8-related support information is captured in KAFKA-1624.

> Readme should specify that Gradle 2.0 is required for initial bootstrap
> ---
>
> Key: KAFKA-1781
> URL: https://issues.apache.org/jira/browse/KAFKA-1781
> Project: Kafka
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.8.2
>Reporter: Jean-Francois Im
>Priority: Blocker
> Fix For: 0.8.2
>
> Attachments: gradle-2.0-readme.patch
>
>
> Current README.md says "You need to have gradle installed."
> As the bootstrap procedure doesn't work with gradle 1.12, this needs to say 
> that 2.0 or greater is needed.
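
For reference, a sketch of the bootstrap sequence the description refers to; 
the two commands match the session reproduced later in this thread:

{code}
# From a fresh clone: a locally installed Gradle (2.0 or greater, per
# this ticket) generates the wrapper, then the wrapper drives the build.
$ gradle          # runs the downloadWrapper task
$ ./gradlew jar
{code}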



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1781) Readme should specify that Gradle 2.0 is required for initial bootstrap

2014-11-17 Thread Joe Stein (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14215030#comment-14215030
 ] 

Joe Stein commented on KAFKA-1781:
--

I think your issue is related to your JVM being 8 instead of 7, not the Gradle 
version; there is some more info in KAFKA-1624.
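
A quick way to confirm which JVM a build will pick up, using plain JDK and 
shell commands (nothing Kafka-specific):

{code}
# If this reports 1.8, the KAFKA-1624 failure applies; the 0.8.2 build
# expects JDK 7 with the default Scala 2.10.1.
$ java -version
$ echo $JAVA_HOME
{code}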

> Readme should specify that Gradle 2.0 is required for initial bootstrap
> ---
>
> Key: KAFKA-1781
> URL: https://issues.apache.org/jira/browse/KAFKA-1781
> Project: Kafka
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.8.2
>Reporter: Jean-Francois Im
>Priority: Blocker
> Fix For: 0.8.2
>
> Attachments: gradle-2.0-readme.patch
>
>
> Current README.md says "You need to have gradle installed."
> As the bootstrap procedure doesn't work with gradle 1.12, this needs to say 
> that 2.0 or greater is needed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1781) Readme should specify that Gradle 2.0 is required for initial bootstrap

2014-11-17 Thread Joe Stein (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14214978#comment-14214978
 ] 

Joe Stein commented on KAFKA-1781:
--

That is weird; I tried from a fresh clone just now:

{code}

new-host:apache_kafka joestein$ git clone 
https://git-wip-us.apache.org/repos/asf/kafka.git KAFKA-1781
Cloning into 'KAFKA-1781'...
remote: Counting objects: 21794, done.
remote: Compressing objects: 100% (7216/7216), done.
remote: Total 21794 (delta 12923), reused 19669 (delta 11330)
Receiving objects: 100% (21794/21794), 15.18 MiB | 623 KiB/s, done.
Resolving deltas: 100% (12923/12923), done.
new-host:apache_kafka joestein$ cd KAFKA-1781/
new-host:KAFKA-1781 joestein$ git checkout -b 0.8.2 origin/0.8.2
Branch 0.8.2 set up to track remote branch 0.8.2 from origin.
Switched to a new branch '0.8.2'
new-host:KAFKA-1781 joestein$ gradle --version


Gradle 1.8


Build time:   2013-09-24 07:32:33 UTC
Build number: none
Revision: 7970ec3503b4f5767ee1c1c69f8b4186c4763e3d

Groovy:   1.8.6
Ant:  Apache Ant(TM) version 1.9.2 compiled on July 8 2013
Ivy:  2.2.0
JVM:  1.7.0_25 (Oracle Corporation 23.25-b01)
OS:   Mac OS X 10.8.5 x86_64

new-host:KAFKA-1781 joestein$ gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/1.8/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.1
:downloadWrapper

BUILD SUCCESSFUL

Total time: 18.37 secs
new-host:KAFKA-1781 joestein$ ./gradlew jar
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.0/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.1
:clients:compileJava
Download 
http://repo1.maven.org/maven2/org/xerial/snappy/snappy-java/1.1.1.6/snappy-java-1.1.1.6.pom
Download 
http://repo1.maven.org/maven2/org/xerial/snappy/snappy-java/1.1.1.6/snappy-java-1.1.1.6.jar
Note: Some input files use unchecked or unsafe operations.
Note: Recompile with -Xlint:unchecked for details.
:clients:processResources UP-TO-DATE
:clients:classes
:clients:jar
:contrib:compileJava UP-TO-DATE
:contrib:processResources UP-TO-DATE
:contrib:classes UP-TO-DATE
:contrib:jar
:core:compileJava UP-TO-DATE
:core:compileScala
/opt/apache_kafka/KAFKA-1781/core/src/main/scala/kafka/admin/AdminUtils.scala:259:
 non-variable type argument String in type pattern 
scala.collection.Map[String,_] is unchecked since it is eliminated by erasure
case Some(map: Map[String, _]) => 
   ^
/opt/apache_kafka/KAFKA-1781/core/src/main/scala/kafka/admin/AdminUtils.scala:262:
 non-variable type argument String in type pattern 
scala.collection.Map[String,String] is unchecked since it is eliminated by 
erasure
case Some(config: Map[String, String]) =>
  ^
/opt/apache_kafka/KAFKA-1781/core/src/main/scala/kafka/server/KafkaServer.scala:168:
 a pure expression does nothing in statement position; you may be omitting 
necessary parentheses
ControllerStats.uncleanLeaderElectionRate
^
/opt/apache_kafka/KAFKA-1781/core/src/main/scala/kafka/server/KafkaServer.scala:169:
 a pure expression does nothing in statement position; you may be omitting 
necessary parentheses
ControllerStats.leaderElectionTimer
^
/opt/apache_kafka/KAFKA-1781/core/src/main/scala/kafka/utils/Utils.scala:81: a 
pure expression does nothing in statement position; you may be omitting 
necessary parentheses
daemonThread(name, runnable(fun))
^
/opt/apache_kafka/KAFKA-1781/core/src/main/scala/kafka/network/SocketServer.scala:361:
 Visited SCOPE_EXIT before visiting corresponding SCOPE_ENTER. SI-6049
  maybeCloseOldestConnection
  ^
/opt/apache_kafka/KAFKA-1781/core/src/main/scala/kafka/network/SocketServer.scala:381:
 Visited SCOPE_EXIT before visiting corresponding SCOPE_ENTER. SI-6049
  try {
  ^
there were 12 feature warning(s); re-run with -feature for details
8 warnings found
:core:processResources UP-TO-DATE
:core:classes
:core:copyDependantLibs
:core:jar
:examples:compileJava
:examples:processResources UP-TO-DATE
:examples:classes
:examples:jar
:contrib:hadoop-consumer:compileJava
Note: Some input files use or override a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
Note: Some input files use unchecked or unsafe operations.
Note: Recompile with -Xlint:unchecked for details.
:contrib:hadoop-consumer:processResources UP-TO-DATE
:contrib:hadoop-consumer:classes
:contrib:hadoop-consumer:jar
:contrib:hadoop-producer:compileJava
:contrib:hadoop-producer:processResources UP-TO-DATE
:contrib:hadoop-producer:classes
:contrib:had

[jira] [Comment Edited] (KAFKA-1624) building on JDK 8 fails

2014-11-17 Thread Joe Stein (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14214956#comment-14214956
 ] 

Joe Stein edited comment on KAFKA-1624 at 11/17/14 6:30 PM:


<< I did some tests locally with various Scala versions. Only the default 
2.10.1 seems not to compile with Java 8; 2.10.2, 2.10.3 and 2.11 are all 
compatible with it. Shall we change the default version of Scala to at least 
2.10.2?

[~guozhang] Thanks for testing the versions out. Your suggestion makes sense 
to me; folks are going to keep bringing this up more and more moving forward, 
and there is no reason to make them keep making a minor change we can ship in 
the 0.8.2 final (I think it would be OK to do it there).

Do we want to go with 2.10.3 instead of 2.10.2 since it is the later version? 

Anyone else have issues with doing this for 0.8.2?


was (Author: joestein):
<< I did some tests locally with various Scala versions. Only the default 
2.10.1 seems not to compile with Java 8; 2.10.2, 2.10.3 and 2.11 are all 
compatible with it. Shall we change the default version of Scala to at least 
2.10.2?

[~guozhang] Thanks for testing the version out. Your suggestion makes sense to 
me; folks are going to keep bringing this up more and more moving forward, and 
there is no reason to make them keep making a minor change we can ship in the 
0.8.2 final (I think it would be OK to do it there).

Do we want to go with 2.10.3 instead of 2.10.2 since it is the later version? 

Anyone else have issues with doing this for 0.8.2?

> building on JDK 8 fails
> ---
>
> Key: KAFKA-1624
> URL: https://issues.apache.org/jira/browse/KAFKA-1624
> Project: Kafka
>  Issue Type: Bug
>Reporter: Joe Stein
>  Labels: newbie
> Fix For: 0.8.2
>
>
> {code}
> Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512m; 
> support was removed in 8.0
> error: error while loading CharSequence, class file 
> '/usr/lib/jvm/java-8-oracle/jre/lib/rt.jar(java/lang/CharSequence.class)' is 
> broken
> (class java.lang.RuntimeException/bad constant pool tag 18 at byte 10)
> error: error while loading Comparator, class file 
> '/usr/lib/jvm/java-8-oracle/jre/lib/rt.jar(java/util/Comparator.class)' is 
> broken
> (class java.lang.RuntimeException/bad constant pool tag 18 at byte 20)
> error: error while loading AnnotatedElement, class file 
> '/usr/lib/jvm/java-8-oracle/jre/lib/rt.jar(java/lang/reflect/AnnotatedElement.class)'
>  is broken
> (class java.lang.RuntimeException/bad constant pool tag 18 at byte 76)
> error: error while loading Arrays, class file 
> '/usr/lib/jvm/java-8-oracle/jre/lib/rt.jar(java/util/Arrays.class)' is broken
> (class java.lang.RuntimeException/bad constant pool tag 18 at byte 765)
> /tmp/sbt_53783b12/xsbt/ExtractAPI.scala:395: error: java.util.Comparator does 
> not take type parameters
>   private[this] val sortClasses = new Comparator[Symbol] {
> ^
> 5 errors found
> :core:compileScala FAILED
> FAILURE: Build failed with an exception.
> * What went wrong:
> Execution failed for task ':core:compileScala'.
> > org.gradle.messaging.remote.internal.PlaceholderException (no error message)
> * Try:
> Run with --stacktrace option to get the stack trace. Run with --info or 
> --debug option to get more log output.
> BUILD FAILED
> Total time: 1 mins 48.298 secs
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (KAFKA-1624) building on JDK 8 fails

2014-11-17 Thread Joe Stein (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14214956#comment-14214956
 ] 

Joe Stein edited comment on KAFKA-1624 at 11/17/14 6:30 PM:


<< I did some tests locally with various Scala versions. Only the default 
2.10.1 seems not to compile with Java 8; 2.10.2, 2.10.3 and 2.11 are all 
compatible with it. Shall we change the default version of Scala to at least 
2.10.2?

[~guozhang] Thanks for testing the version out. Your suggestion makes sense to 
me; folks are going to keep bringing this up more and more moving forward, and 
there is no reason to make them keep making a minor change we can ship in the 
0.8.2 final (I think it would be OK to do it there).

Do we want to go with 2.10.3 instead of 2.10.2 since it is the later version? 

Anyone else have issues with doing this for 0.8.2?


was (Author: joestein):
<< I did some tests locally with various Scala versions. Only the default 
2.10.1 seems not to compile with Java 8; 2.10.2, 2.10.3 and 2.11 are all 
compatible with it. Shall we change the default version of Scala to at least 
2.10.2?

That makes sense to me; folks are going to keep bringing this up more and more 
moving forward, and there is no reason to make them keep making a minor change 
we can ship in the 0.8.2 final (I think it would be OK to do it there).

Do we want to go with 2.10.3 instead of 2.10.2 since it is the later version? 

Anyone else have issues with doing this for 0.8.2?

> building on JDK 8 fails
> ---
>
> Key: KAFKA-1624
> URL: https://issues.apache.org/jira/browse/KAFKA-1624
> Project: Kafka
>  Issue Type: Bug
>Reporter: Joe Stein
>  Labels: newbie
> Fix For: 0.8.2
>
>
> {code}
> Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512m; 
> support was removed in 8.0
> error: error while loading CharSequence, class file 
> '/usr/lib/jvm/java-8-oracle/jre/lib/rt.jar(java/lang/CharSequence.class)' is 
> broken
> (class java.lang.RuntimeException/bad constant pool tag 18 at byte 10)
> error: error while loading Comparator, class file 
> '/usr/lib/jvm/java-8-oracle/jre/lib/rt.jar(java/util/Comparator.class)' is 
> broken
> (class java.lang.RuntimeException/bad constant pool tag 18 at byte 20)
> error: error while loading AnnotatedElement, class file 
> '/usr/lib/jvm/java-8-oracle/jre/lib/rt.jar(java/lang/reflect/AnnotatedElement.class)'
>  is broken
> (class java.lang.RuntimeException/bad constant pool tag 18 at byte 76)
> error: error while loading Arrays, class file 
> '/usr/lib/jvm/java-8-oracle/jre/lib/rt.jar(java/util/Arrays.class)' is broken
> (class java.lang.RuntimeException/bad constant pool tag 18 at byte 765)
> /tmp/sbt_53783b12/xsbt/ExtractAPI.scala:395: error: java.util.Comparator does 
> not take type parameters
>   private[this] val sortClasses = new Comparator[Symbol] {
> ^
> 5 errors found
> :core:compileScala FAILED
> FAILURE: Build failed with an exception.
> * What went wrong:
> Execution failed for task ':core:compileScala'.
> > org.gradle.messaging.remote.internal.PlaceholderException (no error message)
> * Try:
> Run with --stacktrace option to get the stack trace. Run with --info or 
> --debug option to get more log output.
> BUILD FAILED
> Total time: 1 mins 48.298 secs
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1624) building on JDK 8 fails

2014-11-17 Thread Joe Stein (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14214956#comment-14214956
 ] 

Joe Stein commented on KAFKA-1624:
--

<< I did some tests locally with various Scala versions. Only the default 
2.10.1 seems not to compile with Java 8; 2.10.2, 2.10.3 and 2.11 are all 
compatible with it. Shall we change the default version of Scala to at least 
2.10.2?

That makes sense to me; folks are going to keep bringing this up more and more 
moving forward, and there is no reason to make them keep making a minor change 
we can ship in the 0.8.2 final (I think it would be OK to do it there).

Do we want to go with 2.10.3 instead of 2.10.2 since it is the later version? 

Anyone else have issues with doing this for 0.8.2?
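
Until a default change ships, a minimal workaround sketch, assuming the 
-PscalaVersion override documented in the README (the specific version is 
just the suggestion above):

{code}
# Build with a Scala release that compiles under JDK 8 (2.10.2 or later,
# per the tests above) instead of the 2.10.1 default:
$ ./gradlew -PscalaVersion=2.10.3 jar
{code}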

> building on JDK 8 fails
> ---
>
> Key: KAFKA-1624
> URL: https://issues.apache.org/jira/browse/KAFKA-1624
> Project: Kafka
>  Issue Type: Bug
>Reporter: Joe Stein
>  Labels: newbie
> Fix For: 0.8.2
>
>
> {code}
> Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512m; 
> support was removed in 8.0
> error: error while loading CharSequence, class file 
> '/usr/lib/jvm/java-8-oracle/jre/lib/rt.jar(java/lang/CharSequence.class)' is 
> broken
> (class java.lang.RuntimeException/bad constant pool tag 18 at byte 10)
> error: error while loading Comparator, class file 
> '/usr/lib/jvm/java-8-oracle/jre/lib/rt.jar(java/util/Comparator.class)' is 
> broken
> (class java.lang.RuntimeException/bad constant pool tag 18 at byte 20)
> error: error while loading AnnotatedElement, class file 
> '/usr/lib/jvm/java-8-oracle/jre/lib/rt.jar(java/lang/reflect/AnnotatedElement.class)'
>  is broken
> (class java.lang.RuntimeException/bad constant pool tag 18 at byte 76)
> error: error while loading Arrays, class file 
> '/usr/lib/jvm/java-8-oracle/jre/lib/rt.jar(java/util/Arrays.class)' is broken
> (class java.lang.RuntimeException/bad constant pool tag 18 at byte 765)
> /tmp/sbt_53783b12/xsbt/ExtractAPI.scala:395: error: java.util.Comparator does 
> not take type parameters
>   private[this] val sortClasses = new Comparator[Symbol] {
> ^
> 5 errors found
> :core:compileScala FAILED
> FAILURE: Build failed with an exception.
> * What went wrong:
> Execution failed for task ':core:compileScala'.
> > org.gradle.messaging.remote.internal.PlaceholderException (no error message)
> * Try:
> Run with --stacktrace option to get the stack trace. Run with --info or 
> --debug option to get more log output.
> BUILD FAILED
> Total time: 1 mins 48.298 secs
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

