[jira] [Updated] (KAFKA-1861) Publishing kafka-client:test in order to utilize the helper utils in TestUtils

2015-01-21 Thread Manikumar Reddy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar Reddy updated KAFKA-1861:
---
Attachment: KAFKA-1861.patch

 Publishing kafka-client:test in order to utilize the helper utils in TestUtils
 --

 Key: KAFKA-1861
 URL: https://issues.apache.org/jira/browse/KAFKA-1861
 Project: Kafka
  Issue Type: Bug
Reporter: Navina Ramesh
 Attachments: KAFKA-1861.patch


 Related to SAMZA-227 (Upgrade KafkaSystemProducer to new java-based Kafka API)
 It turns out that some of the utilities that are helpful in writing unit tests 
 are available in org.apache.kafka.test.TestUtils.java (:kafka-clients). This 
 is not published to the Maven repository, so we are forced to reproduce the 
 same code in Samza. This could be avoided if the test package were published 
 to the Maven repo.
 For example, we are creating a customized MockProducer to be used in Samza 
 unit tests, and access to these helper utils would be useful.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [VOTE] 0.8.2.0 Candidate 2 (with the correct links)

2015-01-21 Thread Jason Rosenberg
For the maven artifacts listed above, there doesn't seem to be any
distinction between rc1 and rc2, so is it safe to assume that this is rc2
here?:

https://repository.apache.org/content/groups/staging/org/apache/kafka/kafka_2.11/0.8.2.0/

Thanks!

Jason

On Wed, Jan 21, 2015 at 8:28 AM, Jun Rao j...@confluent.io wrote:

 This is the second candidate for the release of Apache Kafka 0.8.2.0. There have
 been some changes since the 0.8.2 beta release, especially in the new Java
 producer API and JMX MBean names. It would be great if people could test this
 out thoroughly.

 Release Notes for the 0.8.2.0 release

 https://people.apache.org/~junrao/kafka-0.8.2.0-candidate2/RELEASE_NOTES.html

 *** Please download, test and vote by Monday, Jan 26th, 7pm PT

 Kafka's KEYS file containing the PGP keys we use to sign the release:
 http://kafka.apache.org/KEYS, in addition to the md5, sha1 and sha2
 (SHA256)
 checksums.

 * Release artifacts to be voted upon (source and binary):
 https://people.apache.org/~junrao/kafka-0.8.2.0-candidate2/

 * Maven artifacts to be voted upon prior to release:
 https://repository.apache.org/content/groups/staging/

 * scala-doc
 https://people.apache.org/~junrao/kafka-0.8.2.0-candidate2/scaladoc/

 * java-doc
 https://people.apache.org/~junrao/kafka-0.8.2.0-candidate2/javadoc/

 * The tag to be voted upon (off the 0.8.2 branch) is the 0.8.2.0 tag

 https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=058d58adef2ab2787e49d8efeefd61bb3d32f99c
 (commit 0b312a6b9f0833d38eec434bfff4c647c1814564)


 Thanks,

 Jun



[jira] [Updated] (KAFKA-1889) Refactor shell wrapper scripts

2015-01-21 Thread Francois Saint-Jacques (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francois Saint-Jacques updated KAFKA-1889:
--
Attachment: refactor-scripts-v1.patch

 Refactor shell wrapper scripts
 --

 Key: KAFKA-1889
 URL: https://issues.apache.org/jira/browse/KAFKA-1889
 Project: Kafka
  Issue Type: Improvement
  Components: packaging
Reporter: Francois Saint-Jacques
Assignee: Francois Saint-Jacques
Priority: Minor
 Attachments: refactor-scripts-v1.patch


 Shell scripts in bin/ need love.





Re: Kafka ecosystem licensing question

2015-01-21 Thread Jonathan Natkins
Hi Eliot,

I can't speak for the Kafka PMC, but as a general rule, if the code is
going to be contributed to the Kafka project itself, it must be
Apache-licensed. What you can do, and what many organizations do, is release
code separately via a public GitHub account, which allows you to
choose whatever license you prefer (for example,
https://github.com/linkedin/camus, or any of the myriad clients listed at
https://cwiki.apache.org/confluence/display/KAFKA/Clients).

However, I think that Apache/BSD/MIT are the safest licenses to use if you
really want people to use your code. AGPL is a particularly contentious one,
especially if you want to use the code in a corporate setting, because it
requires that you open-source any code changes you make, and I think it has
some other fairly serious implications in terms of what must be
open-sourced if you include the code in a larger project.

That said, these are primarily my own opinions, and I am a) not a lawyer,
and b) not an Apache committer.

Thanks,
Natty

Jonathan "Natty" Natkins
StreamSets | Customer Engagement Engineer
mobile: 609.577.1600 | LinkedIn: http://www.linkedin.com/in/nattyice


On Wed, Jan 21, 2015 at 8:48 AM, Weitz, Eliot eliot.we...@viasat.com
wrote:

 Hello,

 I lead a group of developers at our company, ViaSat, who are building a
 set of stream processing services on top of Kafka.  We would very much like
 to open source our work and become part of the Kafka “ecosystem”
 contributing back to the community.

 Our company is fairly new to participating in open source projects, and we are
 wondering about licensing.  If we used something other than an Apache 2
 license (such as a copyleft license like AGPL), do you think it would be
 viewed negatively by your developers or others in the Kafka ecosystem and
 become a barrier to contributing to our project?

 I’d appreciate any insights.

 Good work on Kafka!

 Regards,

 Eliot Weitz


[jira] [Commented] (KAFKA-1845) KafkaConfig should use ConfigDef

2015-01-21 Thread Andrii Biletskyi (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14285979#comment-14285979
 ] 

Andrii Biletskyi commented on KAFKA-1845:
-

In the uploaded patch (KAFKA-1845.patch) all config settings were moved to 
ConfigDef. The ConfigDef.define method requires an Importance field. This 
information is not provided in the current trunk version of KafkaConfig, so I 
used Importance.HIGH everywhere. Please add your comments in review or provide 
a setting-to-importance map.
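Schematically, the define pattern referred to above couples every setting to a default value and an Importance. The snippet below is a self-contained imitation of that pattern for illustration only; the real API is org.apache.kafka.common.config.ConfigDef, and its signatures differ:

```java
import java.util.HashMap;
import java.util.Map;

// Self-contained imitation of the ConfigDef.define(...) pattern; the real
// class lives in org.apache.kafka.common.config.ConfigDef and differs in
// detail. Shown only to illustrate why every setting needs an Importance.
class ConfigDefSketch {
    enum Importance { HIGH, MEDIUM, LOW }

    static class Key {
        final String name;
        final Object defaultValue;
        final Importance importance;
        final String doc;
        Key(String name, Object defaultValue, Importance importance, String doc) {
            this.name = name;
            this.defaultValue = defaultValue;
            this.importance = importance;
            this.doc = doc;
        }
    }

    private final Map<String, Key> keys = new HashMap<>();

    // Every setting must declare an Importance, which is why the patch
    // falls back to Importance.HIGH when trunk carries no such information.
    ConfigDefSketch define(String name, Object def, Importance imp, String doc) {
        keys.put(name, new Key(name, def, imp, doc));
        return this;
    }

    Key get(String name) {
        return keys.get(name);
    }
}
```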

 KafkaConfig should use ConfigDef 
 -

 Key: KAFKA-1845
 URL: https://issues.apache.org/jira/browse/KAFKA-1845
 Project: Kafka
  Issue Type: Sub-task
Reporter: Gwen Shapira
Assignee: Andrii Biletskyi
  Labels: newbie
 Attachments: KAFKA-1845.patch


 ConfigDef is already used for the new producer and for TopicConfig. 
 Will be nice to standardize and use one configuration and validation library 
 across the board.





[jira] [Comment Edited] (KAFKA-1889) Refactor shell wrapper scripts

2015-01-21 Thread Francois Saint-Jacques (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14286097#comment-14286097
 ] 

Francois Saint-Jacques edited comment on KAFKA-1889 at 1/21/15 7:18 PM:


The second patch should give an overview of what a 'clean' kafka-run-class.sh 
should look like. This will allow packagers to provide easily configurable 
defaults via /etc/default/kafka (on Debian-based systems) or 
/etc/sysconfig/kafka (on RHEL-based systems).


was (Author: fsaintjacques):
The second patch should give an overview of what a 'clean' kafka-run-class.sh 
should look like.

 Refactor shell wrapper scripts
 --

 Key: KAFKA-1889
 URL: https://issues.apache.org/jira/browse/KAFKA-1889
 Project: Kafka
  Issue Type: Improvement
  Components: packaging
Reporter: Francois Saint-Jacques
Assignee: Francois Saint-Jacques
Priority: Minor
 Attachments: refactor-scripts-v1.patch, refactor-scripts-v2.patch


 Shell scripts in bin/ need love.





[jira] [Commented] (KAFKA-1782) Junit3 Misusage

2015-01-21 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14286029#comment-14286029
 ] 

Guozhang Wang commented on KAFKA-1782:
--

Jeff,

Sorry for the late reply on this.

I would recommend we remove all references to JUnit3Suite, as it is 1) no 
longer the latest version and 2) confusing to people about expected usage. 
We should also remove annotations other than @Test itself and use 
scalatest features instead.

 Junit3 Misusage
 ---

 Key: KAFKA-1782
 URL: https://issues.apache.org/jira/browse/KAFKA-1782
 Project: Kafka
  Issue Type: Bug
Reporter: Guozhang Wang
Assignee: Jeff Holoman
  Labels: newbie
 Fix For: 0.8.3


 This was found while I was working on KAFKA-1580: in many of our cases where 
 we explicitly extend from JUnit3Suite (e.g. ProducerFailureHandlingTest), we 
 are actually misusing a bunch of features that only exist in JUnit4, such as 
 @Test(expected = classOf[...]). For example, the following code
 {code}
 import org.scalatest.junit.JUnit3Suite
 import org.junit.Test
 import java.io.IOException
 class MiscTest extends JUnit3Suite {
   @Test (expected = classOf[IOException])
   def testSendOffset() {
   }
 }
 {code}
 will actually pass even though IOException was not thrown, since this 
 annotation is not supported in JUnit3. Whereas
 {code}
 import org.junit._
 import java.io.IOException
 class MiscTest extends JUnit3Suite {
   @Test (expected = classOf[IOException])
   def testSendOffset() {
   }
 }
 {code}
 or
 {code}
 import org.scalatest.junit.JUnitSuite
 import org.junit._
 import java.io.IOException
 class MiscTest extends JUnitSuite {
   @Test (expected = classOf[IOException])
   def testSendOffset() {
   }
 }
 {code}
 or
 {code}
 import org.junit._
 import java.io.IOException
 class MiscTest {
   @Test (expected = classOf[IOException])
   def testSendOffset() {
   }
 }
 {code}
 will fail.
 I would propose not relying on JUnit annotations other than @Test itself, but 
 using scalatest assertions such as intercept instead, for example:
 {code}
 import org.scalatest.junit.JUnitSuite
 import org.junit._
 import java.io.IOException
 class MiscTest extends JUnitSuite {
   @Test
   def testSendOffset() {
     intercept[IOException] {
       //nothing
     }
   }
 }
 {code}
 will fail with a clearer stacktrace.





[jira] [Created] (KAFKA-1889) Refactor shell wrapper scripts

2015-01-21 Thread Francois Saint-Jacques (JIRA)
Francois Saint-Jacques created KAFKA-1889:
-

 Summary: Refactor shell wrapper scripts
 Key: KAFKA-1889
 URL: https://issues.apache.org/jira/browse/KAFKA-1889
 Project: Kafka
  Issue Type: Improvement
  Components: packaging
Reporter: Francois Saint-Jacques
Priority: Minor


Shell scripts in bin/ need love.





[jira] [Updated] (KAFKA-1804) Kafka network thread lacks top exception handler

2015-01-21 Thread Alexey Ozeritskiy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Ozeritskiy updated KAFKA-1804:
-
Priority: Critical  (was: Major)

 Kafka network thread lacks top exception handler
 

 Key: KAFKA-1804
 URL: https://issues.apache.org/jira/browse/KAFKA-1804
 Project: Kafka
  Issue Type: Bug
Reporter: Oleg Golovin
Priority: Critical

 We have faced a problem where some Kafka network threads may fail, so that 
 jstack attached to the Kafka process showed fewer threads than we had defined 
 in our Kafka configuration. This leads to API requests processed by such a 
 thread getting stuck without a response.
 There were no error messages in the log regarding the thread failure.
 We have examined the Kafka code and found that there is no top-level try-catch 
 block in the network thread code, which could at least log possible errors.
 Could you add a top-level try-catch block for the network thread, which should 
 recover the network thread in case of an exception?
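The kind of guard being requested can be sketched as follows; the class and method names here are illustrative stand-ins, not Kafka's actual network thread code:

```java
// Hypothetical sketch of a top-level exception handler in a network thread's
// run loop: an unexpected exception is counted/logged and the loop resumes,
// instead of the exception silently killing the thread. Names are invented
// for illustration; this is not Kafka's Processor implementation.
class GuardedNetworkThread implements Runnable {
    boolean running = true;
    int loggedErrors = 0;

    // Simulated unit of work that always fails, to exercise the guard.
    void processOneUnitOfWork() {
        throw new IllegalStateException("simulated failure");
    }

    @Override
    public void run() {
        while (running) {
            try {
                processOneUnitOfWork();
            } catch (Exception e) {
                loggedErrors++;                      // log and keep the thread alive
                if (loggedErrors >= 3) running = false; // demo only: stop after 3
            }
        }
    }

    public static void main(String[] args) {
        GuardedNetworkThread t = new GuardedNetworkThread();
        t.run();
        System.out.println("errors logged: " + t.loggedErrors); // prints "errors logged: 3"
    }
}
```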





[jira] [Commented] (KAFKA-1861) Publishing kafka-client:test in order to utilize the helper utils in TestUtils

2015-01-21 Thread Manikumar Reddy (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14285995#comment-14285995
 ] 

Manikumar Reddy commented on KAFKA-1861:


Can we include this simple patch in 0.8.2, so that Samza developers can use it?

 Publishing kafka-client:test in order to utilize the helper utils in TestUtils
 --

 Key: KAFKA-1861
 URL: https://issues.apache.org/jira/browse/KAFKA-1861
 Project: Kafka
  Issue Type: Bug
Reporter: Navina Ramesh
Assignee: Manikumar Reddy
 Attachments: KAFKA-1861.patch


 Related to SAMZA-227 (Upgrade KafkaSystemProducer to new java-based Kafka API)
 It turns out that some of the utilities that are helpful in writing unit tests 
 are available in org.apache.kafka.test.TestUtils.java (:kafka-clients). This 
 is not published to the Maven repository, so we are forced to reproduce the 
 same code in Samza. This could be avoided if the test package were published 
 to the Maven repo.
 For example, we are creating a customized MockProducer to be used in Samza 
 unit tests, and access to these helper utils would be useful.





[jira] [Commented] (KAFKA-1861) Publishing kafka-client:test in order to utilize the helper utils in TestUtils

2015-01-21 Thread Manikumar Reddy (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14285990#comment-14285990
 ] 

Manikumar Reddy commented on KAFKA-1861:


Created reviewboard https://reviews.apache.org/r/30128/diff/
 against branch origin/0.8.2

 Publishing kafka-client:test in order to utilize the helper utils in TestUtils
 --

 Key: KAFKA-1861
 URL: https://issues.apache.org/jira/browse/KAFKA-1861
 Project: Kafka
  Issue Type: Bug
Reporter: Navina Ramesh
 Attachments: KAFKA-1861.patch


 Related to SAMZA-227 (Upgrade KafkaSystemProducer to new java-based Kafka API)
 It turns out that some of the utilities that are helpful in writing unit tests 
 are available in org.apache.kafka.test.TestUtils.java (:kafka-clients). This 
 is not published to the Maven repository, so we are forced to reproduce the 
 same code in Samza. This could be avoided if the test package were published 
 to the Maven repo.
 For example, we are creating a customized MockProducer to be used in Samza 
 unit tests, and access to these helper utils would be useful.





Re: [KIP-DISCUSSION] Mirror Maker Enhancement

2015-01-21 Thread Jiangjie Qin
Hi Gwen,

Please see inline answers. I'll update them in the KIP as well.

Thanks.

Jiangjie (Becket) Qin

On 1/20/15, 6:39 PM, Gwen Shapira gshap...@cloudera.com wrote:

Thanks for the detailed document, Jiangjie. Super helpful.

Few questions:

1. You mention that "A ConsumerRebalanceListener class is created and
could be wired into ZookeeperConsumerConnector to avoid duplicate
messages when consumer rebalance occurs in mirror maker."

Is this something the user needs to do or configure? Or will the wiring
of the rebalance listener into the zookeeper consumer be part of the
enhancement?
In other words, will we need to do anything extra to avoid duplicates
during rebalance in MirrorMaker?
For ZookeeperConsumerConnector in general, users need to wire in the listener
themselves in code.
For Mirror Maker, an internal rebalance listener has been wired in by
default to avoid duplicates on consumer rebalance. Users can still
specify a custom listener class as a command-line argument; the internal
rebalance listener will call that listener after it finishes the default
logic.
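The wiring described above can be pictured with a small sketch. The listener interface and method name below are simplified stand-ins for the old consumer's actual rebalance callback API, and the "actions" are placeholders for the real flush/commit logic:

```java
import java.util.ArrayList;
import java.util.List;

// Simplified stand-in for the real consumer rebalance callback; the actual
// interface in the old consumer API differs in names and signatures.
interface RebalanceListener {
    void beforeReleasingPartitions();
}

// Hypothetical internal mirror-maker listener: runs the default duplicate-
// avoidance logic first, then delegates to any user-supplied listener.
class MirrorMakerListener implements RebalanceListener {
    private final RebalanceListener custom;   // user-supplied, may be null
    final List<String> actions = new ArrayList<>();

    MirrorMakerListener(RebalanceListener custom) {
        this.custom = custom;
    }

    @Override
    public void beforeReleasingPartitions() {
        // Default logic first: flush the producer and commit only acked
        // offsets so no message is lost or re-consumed after the rebalance.
        actions.add("flush-producer");
        actions.add("commit-acked-offsets");
        if (custom != null) {
            custom.beforeReleasingPartitions();
        }
    }
}
```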

2. "The only source of truth for offsets in the consume-then-send pattern
is the end user." - I assume you don't mean an actual person, right? So
what does end user refer to? Can you clarify when the offset
commit thread will commit offsets? And which JIRA implements this?
By end user I mean the target cluster here. The offset commit thread
commits offsets periodically. It only commits the offsets that have been
acked.

3. Maintaining message order - In which JIRA do we implement this part?
KAFKA-1650

Again, thanks a lot for documenting this and even more for the
implementation - it is super important for many use cases.

Gwen



On Tue, Jan 20, 2015 at 4:31 PM, Jiangjie Qin j...@linkedin.com.invalid
wrote:
 Hi Kafka Devs,

 We are working on a Kafka Mirror Maker enhancement. A KIP is posted to
document and discuss the following:
 1. KAFKA-1650: No data loss mirror maker change
 2. KAFKA-1839: To allow partition-aware mirroring.
 3. KAFKA-1840: To allow message filtering/format conversion
 Feedback is welcome. Please let us know if you have any questions or
concerns.

 Thanks.

 Jiangjie (Becket) Qin



[jira] [Commented] (KAFKA-1804) Kafka network thread lacks top exception handler

2015-01-21 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14286162#comment-14286162
 ] 

Sriharsha Chintalapani commented on KAFKA-1804:
---

[~jjkoshy] [~aozeritsky] this looks to be similar in nature to KAFKA-1577. Do 
you have any steps to reproduce this?

 Kafka network thread lacks top exception handler
 

 Key: KAFKA-1804
 URL: https://issues.apache.org/jira/browse/KAFKA-1804
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 0.8.2
Reporter: Oleg Golovin
Priority: Critical

 We have faced a problem where some Kafka network threads may fail, so that 
 jstack attached to the Kafka process showed fewer threads than we had defined 
 in our Kafka configuration. This leads to API requests processed by such a 
 thread getting stuck without a response.
 There were no error messages in the log regarding the thread failure.
 We have examined the Kafka code and found that there is no top-level try-catch 
 block in the network thread code, which could at least log possible errors.
 Could you add a top-level try-catch block for the network thread, which should 
 recover the network thread in case of an exception?





[jira] [Commented] (KAFKA-1728) update 082 docs

2015-01-21 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14286158#comment-14286158
 ] 

Jun Rao commented on KAFKA-1728:


Thanks for the patch for the missing configs. +1 and committed to site.

 update 082 docs
 ---

 Key: KAFKA-1728
 URL: https://issues.apache.org/jira/browse/KAFKA-1728
 Project: Kafka
  Issue Type: Task
Affects Versions: 0.8.2
Reporter: Jun Rao
Priority: Blocker
 Fix For: 0.8.2

 Attachments: default-config-value-0.8.2.patch, 
 missing-config-props-0.8.2.patch


 We need to update the docs for 082 release.
 https://svn.apache.org/repos/asf/kafka/site/082
 http://kafka.apache.org/082/documentation.html





Review Request 30128: Patch for KAFKA-1861

2015-01-21 Thread Manikumar Reddy O

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/30128/
---

Review request for kafka.


Bugs: KAFKA-1861
https://issues.apache.org/jira/browse/KAFKA-1861


Repository: kafka


Description
---

include clients test jar in maven artifacts
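For context, publishing a project's test classes as a separate artifact usually takes roughly this shape in Gradle builds of that era. This is a generic sketch of the technique, not the actual contents of the attached patch:

```groovy
// Generic Gradle pattern: bundle the test classes into a jar with a 'test'
// classifier and register it as a published artifact (sketch only).
task testJar(type: Jar) {
    classifier = 'test'
    from sourceSets.test.output
}

artifacts {
    archives testJar
}
```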


Diffs
-

  build.gradle 1cbab29ce83e20dae0561b51eed6fdb86d522f28 

Diff: https://reviews.apache.org/r/30128/diff/


Testing
---


Thanks,

Manikumar Reddy O



Review Request 30126: Patch for KAFKA-1845

2015-01-21 Thread Andrii Biletskyi

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/30126/
---

Review request for kafka.


Bugs: KAFKA-1845
https://issues.apache.org/jira/browse/KAFKA-1845


Repository: kafka


Description
---

KAFKA-1845 - Fixed merge conflicts, ported added configs to KafkaConfig


KAFKA-1845 - KafkaConfig to ConfigDef: moved validateValues so it's called on 
instantiating KafkaConfig


KAFKA-1845 - KafkaConfig to ConfigDef: MaxConnectionsPerIpOverrides refactored


Diffs
-

  clients/src/main/java/org/apache/kafka/common/config/ConfigDef.java 
98cb79b701918eca3f6ca9823b6c7b7c97b3ecec 
  core/src/main/scala/kafka/Kafka.scala 
77a49e12af6f869e63230162e9f87a7b0b12b610 
  core/src/main/scala/kafka/controller/KafkaController.scala 
66df6d2fbdbdd556da6bea0df84f93e0472c8fbf 
  core/src/main/scala/kafka/controller/PartitionLeaderSelector.scala 
4a31c7271c2d0a4b9e8b28be729340ecfa0696e5 
  core/src/main/scala/kafka/server/KafkaConfig.scala 
6d74983472249eac808d361344c58cc2858ec971 
  core/src/main/scala/kafka/server/KafkaServer.scala 
89200da30a04943f0b9befe84ab17e62b747c8c4 
  core/src/main/scala/kafka/server/ReplicaFetcherThread.scala 
6879e730282185bda3d6bc3659cb15af0672cecf 
  core/src/test/scala/integration/kafka/api/ProducerCompressionTest.scala 
e63558889272bc76551accdfd554bdafde2e0dd6 
  core/src/test/scala/integration/kafka/api/ProducerFailureHandlingTest.scala 
90c0b7a19c7af8e5416e4bdba62b9824f1abd5ab 
  core/src/test/scala/integration/kafka/api/ProducerSendTest.scala 
b15237b76def3b234924280fa3fdb25dbb0cc0dc 
  core/src/test/scala/unit/kafka/admin/AddPartitionsTest.scala 
1bf2667f47853585bc33ffb3e28256ec5f24ae84 
  core/src/test/scala/unit/kafka/admin/AdminTest.scala 
e28979827110dfbbb92fe5b152e7f1cc973de400 
  core/src/test/scala/unit/kafka/admin/DeleteTopicTest.scala 
33c27678bf8ae8feebcbcdaa4b90a1963157b4a5 
  core/src/test/scala/unit/kafka/consumer/ConsumerIteratorTest.scala 
c0355cc0135c6af2e346b4715659353a31723b86 
  core/src/test/scala/unit/kafka/consumer/ZookeeperConsumerConnectorTest.scala 
a17e8532c44aadf84b8da3a57bcc797a848b5020 
  core/src/test/scala/unit/kafka/integration/AutoOffsetResetTest.scala 
95303e098d40cd790fb370e9b5a47d20860a6da3 
  core/src/test/scala/unit/kafka/integration/FetcherTest.scala 
25845abbcad2e79f56f729e59239b738d3ddbc9d 
  core/src/test/scala/unit/kafka/integration/PrimitiveApiTest.scala 
a5386a03b62956bc440b40783247c8cdf7432315 
  core/src/test/scala/unit/kafka/integration/RollingBounceTest.scala 
eab4b5f619015af42e4554660eafb5208e72ea33 
  core/src/test/scala/unit/kafka/integration/TopicMetadataTest.scala 
35dc071b1056e775326981573c9618d8046e601d 
  core/src/test/scala/unit/kafka/integration/UncleanLeaderElectionTest.scala 
ba3bcdcd1de9843e75e5395dff2fc31b39a5a9d5 
  
core/src/test/scala/unit/kafka/javaapi/consumer/ZookeeperConsumerConnectorTest.scala
 d6248b09bb0f86ee7d3bd0ebce5b99135491453b 
  core/src/test/scala/unit/kafka/log/LogTest.scala 
c2dd8eb69da8c0982a0dd20231c6f8bd58eb623e 
  core/src/test/scala/unit/kafka/log4j/KafkaLog4jAppenderTest.scala 
4ea0489c9fd36983fe190491a086b39413f3a9cd 
  core/src/test/scala/unit/kafka/metrics/MetricsTest.scala 
3cf23b3d6d4460535b90cfb36281714788fc681c 
  core/src/test/scala/unit/kafka/producer/AsyncProducerTest.scala 
1db6ac329f7b54e600802c8a623f80d159d4e69b 
  core/src/test/scala/unit/kafka/producer/ProducerTest.scala 
ce65dab4910d9182e6774f6ef1a7f45561ec0c23 
  core/src/test/scala/unit/kafka/producer/SyncProducerTest.scala 
d60d8e0f49443f4dc8bc2cad6e2f951eda28f5cb 
  core/src/test/scala/unit/kafka/server/AdvertiseBrokerTest.scala 
f0c4a56b61b4f081cf4bee799c6e9c523ff45e19 
  core/src/test/scala/unit/kafka/server/DynamicConfigChangeTest.scala 
ad121169a5e80ebe1d311b95b219841ed69388e2 
  core/src/test/scala/unit/kafka/server/HighwatermarkPersistenceTest.scala 
8913fc1d59f717c6b3ed12c8362080fb5698986b 
  core/src/test/scala/unit/kafka/server/ISRExpirationTest.scala 
a703d2715048c5602635127451593903f8d20576 
  core/src/test/scala/unit/kafka/server/KafkaConfigConfigDefTest.scala 
PRE-CREATION 
  core/src/test/scala/unit/kafka/server/KafkaConfigTest.scala 
82dce80d553957d8b5776a9e140c346d4e07f766 
  core/src/test/scala/unit/kafka/server/LeaderElectionTest.scala 
c2ba07c5fdbaf0e65ca033b2e4d88f45a8a15b2e 
  core/src/test/scala/unit/kafka/server/LogOffsetTest.scala 
c06ee756bf0fe07e5d3c92823a476c960b37afd6 
  core/src/test/scala/unit/kafka/server/LogRecoveryTest.scala 
d5d351c4f25933da0ba776a6a89a989f1ca6a902 
  core/src/test/scala/unit/kafka/server/OffsetCommitTest.scala 
5b93239cdc26b5be7696f4e7863adb9fbe5f0ed5 
  core/src/test/scala/unit/kafka/server/ReplicaFetchTest.scala 
da4bafc1e2a94a436efe395aab1888fc21e55748 
  core/src/test/scala/unit/kafka/server/ReplicaManagerTest.scala 
faa907131ed0aa94a7eacb78c1ffb576062be87a 
  

[jira] [Commented] (KAFKA-1889) Refactor shell wrapper scripts

2015-01-21 Thread Francois Saint-Jacques (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14286006#comment-14286006
 ] 

Francois Saint-Jacques commented on KAFKA-1889:
---

I have multiple other comments on the scripts that I didn't address and that 
might be worth discussing.

1. There seem to be many ways to pass options to kafka-run-class.sh, either by 
arguments (-daemon|-loggc|...) or by environment variables 
(KAFKA_JMX_OPTS|KAFKA_OPTS|KAFKA_HEAP_OPTS|...). This is inconsistent and needs 
to be addressed.
2. Scripts shouldn't bother daemonizing; leave this to packagers and just make 
sure you exec correctly.
3. The defaults are not production-ready for servers:
 - GC logging shouldn't be enabled by default
 - kafka-request.log at TRACE is a silent disk killer on a busy cluster
 - never do this in a non-init script; it should be left to packagers: if [ ! -d 
${LOG_DIR} ]; then mkdir -p ${LOG_DIR}; fi
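The defaults-sourcing approach suggested in the earlier comment could look roughly like this. The file path and fallback values here are assumptions for illustration, not the actual patch contents:

```shell
#!/bin/sh
# Hypothetical sketch of the pattern discussed above: a wrapper script sources
# distro-provided defaults, then falls back to built-in values. The path
# /etc/default/kafka and the fallback values are illustrative assumptions.
KAFKA_DEFAULTS="${KAFKA_DEFAULTS:-/etc/default/kafka}"
if [ -r "$KAFKA_DEFAULTS" ]; then
    . "$KAFKA_DEFAULTS"
fi

# Only assign a fallback when the packager (or environment) set nothing.
: "${KAFKA_HEAP_OPTS:=-Xmx1G -Xms1G}"
: "${LOG_DIR:=/var/log/kafka}"

echo "heap: $KAFKA_HEAP_OPTS"
echo "logs: $LOG_DIR"
```

With this split, a Debian or RHEL package only ships a small defaults file; the wrapper itself never needs editing.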

 Refactor shell wrapper scripts
 --

 Key: KAFKA-1889
 URL: https://issues.apache.org/jira/browse/KAFKA-1889
 Project: Kafka
  Issue Type: Improvement
  Components: packaging
Reporter: Francois Saint-Jacques
Assignee: Francois Saint-Jacques
Priority: Minor
 Attachments: refactor-scripts-v1.patch


 Shell scripts in bin/ need love.





[jira] [Updated] (KAFKA-1889) Refactor shell wrapper scripts

2015-01-21 Thread Francois Saint-Jacques (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francois Saint-Jacques updated KAFKA-1889:
--
Assignee: Francois Saint-Jacques
  Status: Patch Available  (was: Open)

 Refactor shell wrapper scripts
 --

 Key: KAFKA-1889
 URL: https://issues.apache.org/jira/browse/KAFKA-1889
 Project: Kafka
  Issue Type: Improvement
  Components: packaging
Reporter: Francois Saint-Jacques
Assignee: Francois Saint-Jacques
Priority: Minor

 Shell scripts in bin/ need love.





[jira] [Updated] (KAFKA-1845) KafkaConfig should use ConfigDef

2015-01-21 Thread Andrii Biletskyi (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1845?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrii Biletskyi updated KAFKA-1845:

Status: Patch Available  (was: Open)

 KafkaConfig should use ConfigDef 
 -

 Key: KAFKA-1845
 URL: https://issues.apache.org/jira/browse/KAFKA-1845
 Project: Kafka
  Issue Type: Sub-task
Reporter: Gwen Shapira
Assignee: Andrii Biletskyi
  Labels: newbie
 Attachments: KAFKA-1845.patch


 ConfigDef is already used for the new producer and for TopicConfig. 
 Will be nice to standardize and use one configuration and validation library 
 across the board.





[jira] [Updated] (KAFKA-1861) Publishing kafka-client:test in order to utilize the helper utils in TestUtils

2015-01-21 Thread Manikumar Reddy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar Reddy updated KAFKA-1861:
---
Assignee: Manikumar Reddy
  Status: Patch Available  (was: Open)

 Publishing kafka-client:test in order to utilize the helper utils in TestUtils
 --

 Key: KAFKA-1861
 URL: https://issues.apache.org/jira/browse/KAFKA-1861
 Project: Kafka
  Issue Type: Bug
Reporter: Navina Ramesh
Assignee: Manikumar Reddy
 Attachments: KAFKA-1861.patch


 Related to SAMZA-227 (Upgrade KafkaSystemProducer to new java-based Kafka API)
 It turns out that some of the utilities that are helpful in writing unit tests 
 are available in org.apache.kafka.test.TestUtils.java (:kafka-clients). This 
 is not published to the Maven repository, so we are forced to reproduce the 
 same code in Samza. This could be avoided if the test package were published 
 to the Maven repo.
 For example, we are creating a customized MockProducer to be used in Samza 
 unit tests, and access to these helper utils would be useful.





[jira] [Commented] (KAFKA-1889) Refactor shell wrapper scripts

2015-01-21 Thread Francois Saint-Jacques (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14286097#comment-14286097
 ] 

Francois Saint-Jacques commented on KAFKA-1889:
---

The second patch should give an overview of what a 'clean' kafka-run-class.sh 
should look like.

 Refactor shell wrapper scripts
 --

 Key: KAFKA-1889
 URL: https://issues.apache.org/jira/browse/KAFKA-1889
 Project: Kafka
  Issue Type: Improvement
  Components: packaging
Reporter: Francois Saint-Jacques
Assignee: Francois Saint-Jacques
Priority: Minor
 Attachments: refactor-scripts-v1.patch, refactor-scripts-v2.patch


 Shell scripts in bin/ need love.





Re: [VOTE] 0.8.2.0 Candidate 2 (with the correct links)

2015-01-21 Thread Jun Rao
That's right. What's in the Maven staging repo is always for the latest RC
being voted on. The staged artifacts will be promoted to Maven Central once
the vote passes.

Thanks,

Jun

On Wed, Jan 21, 2015 at 10:35 AM, Jason Rosenberg j...@squareup.com wrote:

 For the maven artifacts listed above, there doesn't seem to be any
 distinction between rc1 and rc2, so is it safe to assume that this is rc2
 here?:


 https://repository.apache.org/content/groups/staging/org/apache/kafka/kafka_2.11/0.8.2.0/

 Thanks!

 Jason

 On Wed, Jan 21, 2015 at 8:28 AM, Jun Rao j...@confluent.io wrote:

  This is the second candidate for the release of Apache Kafka 0.8.2.0. There
  have been some changes since the 0.8.2 beta release, especially in the new
  Java producer API and JMX MBean names. It would be great if people could
  test this out thoroughly.
 
  Release Notes for the 0.8.2.0 release
 
 
 https://people.apache.org/~junrao/kafka-0.8.2.0-candidate2/RELEASE_NOTES.html
 
  *** Please download, test and vote by Monday, Jan 26th, 7pm PT
 
  Kafka's KEYS file containing PGP keys we use to sign the release:
  http://kafka.apache.org/KEYS in addition to the md5, sha1 and sha2
  (SHA256)
  checksums.
 
  * Release artifacts to be voted upon (source and binary):
  https://people.apache.org/~junrao/kafka-0.8.2.0-candidate2/
 
  * Maven artifacts to be voted upon prior to release:
  https://repository.apache.org/content/groups/staging/
 
  * scala-doc
  https://people.apache.org/~junrao/kafka-0.8.2.0-candidate2/scaladoc/
 
  * java-doc
  https://people.apache.org/~junrao/kafka-0.8.2.0-candidate2/javadoc/
 
  * The tag to be voted upon (off the 0.8.2 branch) is the 0.8.2.0 tag
 
 
 https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=058d58adef2ab2787e49d8efeefd61bb3d32f99c
  (commit 0b312a6b9f0833d38eec434bfff4c647c1814564)
 
 
  Thanks,
 
  Jun
 



[jira] [Updated] (KAFKA-1889) Refactor shell wrapper scripts

2015-01-21 Thread Francois Saint-Jacques (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francois Saint-Jacques updated KAFKA-1889:
--
Attachment: refactor-scripts-v2.patch

 Refactor shell wrapper scripts
 --

 Key: KAFKA-1889
 URL: https://issues.apache.org/jira/browse/KAFKA-1889
 Project: Kafka
  Issue Type: Improvement
  Components: packaging
Reporter: Francois Saint-Jacques
Assignee: Francois Saint-Jacques
Priority: Minor
 Attachments: refactor-scripts-v1.patch, refactor-scripts-v2.patch


 Shell scripts in bin/ need love.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1367) Broker topic metadata not kept in sync with ZooKeeper

2015-01-21 Thread Joe Stein (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Stein updated KAFKA-1367:
-
Fix Version/s: 0.8.3

 Broker topic metadata not kept in sync with ZooKeeper
 -

 Key: KAFKA-1367
 URL: https://issues.apache.org/jira/browse/KAFKA-1367
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.0, 0.8.1
Reporter: Ryan Berdeen
  Labels: newbie++
 Fix For: 0.8.3

 Attachments: KAFKA-1367.txt


 When a broker is restarted, the topic metadata responses from the brokers 
 will be incorrect (different from ZooKeeper) until a preferred replica leader 
 election.
 In the metadata, it looks like leaders are correctly removed from the ISR 
 when a broker disappears, but followers are not. Then, when a broker 
 reappears, the ISR is never updated.
 I used a variation of the Vagrant setup created by Joe Stein to reproduce 
 this with latest from the 0.8.1 branch: 
 https://github.com/also/kafka/commit/dba36a503a5e22ea039df0f9852560b4fb1e067c



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[VOTE] 0.8.2.0 Candidate 2 (with the correct links)

2015-01-21 Thread Jun Rao
This is the second candidate for release of Apache Kafka 0.8.2.0. There
have been some changes since the 0.8.2 beta release, especially in the new
Java producer API and JMX MBean names. It would be great if people can test
this out thoroughly.

Release Notes for the 0.8.2.0 release
https://people.apache.org/~junrao/kafka-0.8.2.0-candidate2/RELEASE_NOTES.html

*** Please download, test and vote by Monday, Jan 26th, 7pm PT

Kafka's KEYS file containing PGP keys we use to sign the release:
http://kafka.apache.org/KEYS in addition to the md5, sha1 and sha2 (SHA256)
checksum.

* Release artifacts to be voted upon (source and binary):
https://people.apache.org/~junrao/kafka-0.8.2.0-candidate2/

* Maven artifacts to be voted upon prior to release:
https://repository.apache.org/content/groups/staging/

* scala-doc
https://people.apache.org/~junrao/kafka-0.8.2.0-candidate2/scaladoc/

* java-doc
https://people.apache.org/~junrao/kafka-0.8.2.0-candidate2/javadoc/

* The tag to be voted upon (off the 0.8.2 branch) is the 0.8.2.0 tag
https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=058d58adef2ab2787e49d8efeefd61bb3d32f99c
(commit 0b312a6b9f0833d38eec434bfff4c647c1814564)

/***

Thanks,

Jun
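As a minimal sketch of the checksum step a voter might script when verifying the artifacts above (file names are hypothetical; the real artifacts and their checksum files are at the release URL):

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Compute the hex SHA-256 digest of a file, streaming in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(artifact_path, checksum_file_path):
    """Compare an artifact's digest to the first token of a .sha256 file."""
    with open(checksum_file_path) as f:
        expected = f.read().split()[0].lower()
    return sha256_of(artifact_path) == expected

# Usage (hypothetical file names):
#   verify("kafka_2.11-0.8.2.0.tgz", "kafka_2.11-0.8.2.0.tgz.sha256")
```

The PGP signature check is separate: that is `gpg --verify` against the KEYS file, which this sketch does not cover.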


Re: [kafka-clients] [VOTE] 0.8.2.0 Candidate 2

2015-01-21 Thread Manikumar Reddy
Ok, got it.  Link is different from Release Candidate 1.

On Wed, Jan 21, 2015 at 10:01 PM, Jun Rao j...@confluent.io wrote:

 Is it? You just need to navigate into org, then apache, then kafka, etc.

 Thanks,

 Jun

 On Wed, Jan 21, 2015 at 8:28 AM, Manikumar Reddy ku...@nmsworks.co.in
 wrote:

 Also Maven artifacts link is not correct

 On Wed, Jan 21, 2015 at 9:50 PM, Jun Rao j...@confluent.io wrote:

 Yes, will send out a new email with the correct links.

 Thanks,

 Jun

 On Wed, Jan 21, 2015 at 3:12 AM, Manikumar Reddy ku...@nmsworks.co.in
 wrote:

 All links are pointing to
 https://people.apache.org/~junrao/kafka-0.8.2.0-candidate1/.
 They should be
 https://people.apache.org/~junrao/kafka-0.8.2.0-candidate2/ right?


 On Tue, Jan 20, 2015 at 8:32 AM, Jun Rao j...@confluent.io wrote:

  This is the second candidate for release of Apache Kafka 0.8.2.0.
  There have been some changes since the 0.8.2 beta release, especially
  in the new Java producer API and JMX MBean names. It would be great if
  people can test this out thoroughly.

 Release Notes for the 0.8.2.0 release
 *https://people.apache.org/~junrao/kafka-0.8.2.0-candidate2/RELEASE_NOTES.html
 https://people.apache.org/~junrao/kafka-0.8.2.0-candidate1/RELEASE_NOTES.html*

 *** Please download, test and vote by Monday, Jan 26th, 7pm PT

 Kafka's KEYS file containing PGP keys we use to sign the release:
 *http://kafka.apache.org/KEYS http://kafka.apache.org/KEYS* in
 addition to the md5, sha1 and sha2 (SHA256) checksum.

 * Release artifacts to be voted upon (source and binary):
 *https://people.apache.org/~junrao/kafka-0.8.2.0-candidate2/
 https://people.apache.org/~junrao/kafka-0.8.2.0-candidate1/*

 * Maven artifacts to be voted upon prior to release:
 *https://repository.apache.org/content/groups/staging/
 https://repository.apache.org/content/groups/staging/*

 * scala-doc
 *https://people.apache.org/~junrao/kafka-0.8.2.0-candidate2/scaladoc/
 https://people.apache.org/~junrao/kafka-0.8.2.0-candidate1/scaladoc/#package*

 * java-doc
 *https://people.apache.org/~junrao/kafka-0.8.2.0-candidate2/javadoc/
 https://people.apache.org/~junrao/kafka-0.8.2.0-candidate1/javadoc/*

 * The tag to be voted upon (off the 0.8.2 branch) is the 0.8.2.0 tag
 *https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=058d58adef2ab2787e49d8efeefd61bb3d32f99c
 https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=058d58adef2ab2787e49d8efeefd61bb3d32f99c*
  (commit 0b312a6b9f0833d38eec434bfff4c647c1814564)

 /***

 Thanks,

 Jun

 --
 You received this message because you are subscribed to the Google
 Groups kafka-clients group.
 To unsubscribe from this group and stop receiving emails from it, send
 an email to kafka-clients+unsubscr...@googlegroups.com.
 To post to this group, send email to kafka-clie...@googlegroups.com.
 Visit this group at http://groups.google.com/group/kafka-clients.
 To view this discussion on the web visit
 https://groups.google.com/d/msgid/kafka-clients/CAFc58G-UiYefj%3Dt3C1x85m7q1xjDifTnLSnkujMpP40GHLNwag%40mail.gmail.com
 https://groups.google.com/d/msgid/kafka-clients/CAFc58G-UiYefj%3Dt3C1x85m7q1xjDifTnLSnkujMpP40GHLNwag%40mail.gmail.com?utm_medium=emailutm_source=footer
 .
 For more options, visit https://groups.google.com/d/optout.









Re: Review Request 27799: New consumer

2015-01-21 Thread Jay Kreps

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/27799/
---

(Updated Jan. 21, 2015, 4:42 p.m.)


Review request for kafka.


Summary (updated)
-

New consumer


Bugs: KAFKA-1760
https://issues.apache.org/jira/browse/KAFKA-1760


Repository: kafka


Description
---

New consumer.


Diffs (updated)
-

  build.gradle c9ac43378c3bf5443f0f47c8ba76067237ecb348 
  clients/src/main/java/org/apache/kafka/clients/ClientRequest.java 
d32c319d8ee4c46dad309ea54b136ea9798e2fd7 
  clients/src/main/java/org/apache/kafka/clients/ClusterConnectionStates.java 
8aece7e81a804b177a6f2c12e2dc6c89c1613262 
  clients/src/main/java/org/apache/kafka/clients/CommonClientConfigs.java 
PRE-CREATION 
  clients/src/main/java/org/apache/kafka/clients/ConnectionState.java 
ab7e3220f9b76b92ef981d695299656f041ad5ed 
  clients/src/main/java/org/apache/kafka/clients/KafkaClient.java 
397695568d3fd8e835d8f923a89b3b00c96d0ead 
  clients/src/main/java/org/apache/kafka/clients/NetworkClient.java 
6746275d0b2596cd6ff7ce464a3a8225ad75ef00 
  clients/src/main/java/org/apache/kafka/clients/RequestCompletionHandler.java 
PRE-CREATION 
  clients/src/main/java/org/apache/kafka/clients/consumer/CommitType.java 
PRE-CREATION 
  clients/src/main/java/org/apache/kafka/clients/consumer/Consumer.java 
c0c636b3e1ba213033db6d23655032c9bbd5e378 
  clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerConfig.java 
57c1807ccba9f264186f83e91f37c34b959c8060 
  
clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerRebalanceCallback.java
 e4cf7d1cfa01c2844b53213a7b539cdcbcbeaa3a 
  clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerRecord.java 
16af70a5de52cca786fdea147a6a639b7dc4a311 
  clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerRecords.java 
bdf4b26942d5a8c8a9503e05908e9a9eff6228a7 
  clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java 
76efc216c9e6c3ab084461d792877092a189ad0f 
  clients/src/main/java/org/apache/kafka/clients/consumer/MockConsumer.java 
fa88ac1a8b19b4294f211c4467fe68c7707ddbae 
  
clients/src/main/java/org/apache/kafka/clients/consumer/NoOffsetForPartitionException.java
 PRE-CREATION 
  clients/src/main/java/org/apache/kafka/clients/consumer/OffsetMetadata.java 
ea423ad15eebd262d20d5ec05d592cc115229177 
  
clients/src/main/java/org/apache/kafka/clients/consumer/internals/Heartbeat.java
 PRE-CREATION 
  
clients/src/main/java/org/apache/kafka/clients/consumer/internals/NoOpConsumerRebalanceCallback.java
 PRE-CREATION 
  
clients/src/main/java/org/apache/kafka/clients/consumer/internals/SubscriptionState.java
 PRE-CREATION 
  clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java 
fc71710dd5997576d3841a1c3b0f7e19a8c9698e 
  clients/src/main/java/org/apache/kafka/clients/producer/MockProducer.java 
904976fadf0610982958628eaee810b60a98d725 
  clients/src/main/java/org/apache/kafka/clients/producer/ProducerConfig.java 
8b3e565edd1ae04d8d34bd9f1a41e9fa8c880a75 
  
clients/src/main/java/org/apache/kafka/clients/producer/internals/Metadata.java 
dcf46581b912cfb1b5c8d4cbc293d2d1444b7740 
  
clients/src/main/java/org/apache/kafka/clients/producer/internals/Partitioner.java
 483899d2e69b33655d0e08949f5f64af2519660a 
  clients/src/main/java/org/apache/kafka/clients/producer/internals/Sender.java 
ccc03d8447ebba40131a70e16969686ac4aab58a 
  clients/src/main/java/org/apache/kafka/common/Cluster.java 
d3299b944062d96852452de455902659ad8af757 
  clients/src/main/java/org/apache/kafka/common/PartitionInfo.java 
b15aa2c3ef2d7c4b24618ff42fd4da324237a813 
  clients/src/main/java/org/apache/kafka/common/config/ConfigDef.java 
98cb79b701918eca3f6ca9823b6c7b7c97b3ecec 
  clients/src/main/java/org/apache/kafka/common/errors/ApiException.java 
7c948b166a8ac07616809f260754116ae7764973 
  clients/src/main/java/org/apache/kafka/common/network/Selectable.java 
b68bbf00ab8eba6c5867d346c91188142593ca6e 
  clients/src/main/java/org/apache/kafka/common/network/Selector.java 
74d695ba39de44b6a3d15340ec0114bc4fce2ba2 
  clients/src/main/java/org/apache/kafka/common/protocol/Errors.java 
3316b6a1098311b8603a4a5893bf57b75d2e43cb 
  clients/src/main/java/org/apache/kafka/common/protocol/types/Struct.java 
121e880a941fcd3e6392859edba11a94236494cc 
  clients/src/main/java/org/apache/kafka/common/record/LogEntry.java 
e4d688cbe0c61b74ea15fc8dd3b634f9e5ee9b83 
  clients/src/main/java/org/apache/kafka/common/record/MemoryRecords.java 
040e5b91005edb8f015afdfa76fd94e0bf3cb4ca 
  
clients/src/main/java/org/apache/kafka/common/requests/ConsumerMetadataRequest.java
 99b52c23d639df010bf2affc0f79d1c6e16ed67c 
  
clients/src/main/java/org/apache/kafka/common/requests/ConsumerMetadataResponse.java
 8b8f591c4b2802a9cbbe34746c0b3ca4a64a8681 
  clients/src/main/java/org/apache/kafka/common/requests/FetchRequest.java 

Re: [kafka-clients] [VOTE] 0.8.2.0 Candidate 2

2015-01-21 Thread Jun Rao
Is it? You just need to navigate into org, then apache, then kafka, etc.

Thanks,

Jun

On Wed, Jan 21, 2015 at 8:28 AM, Manikumar Reddy ku...@nmsworks.co.in
wrote:

 Also Maven artifacts link is not correct

 On Wed, Jan 21, 2015 at 9:50 PM, Jun Rao j...@confluent.io wrote:

 Yes, will send out a new email with the correct links.

 Thanks,

 Jun

 On Wed, Jan 21, 2015 at 3:12 AM, Manikumar Reddy ku...@nmsworks.co.in
 wrote:

 All links are pointing to
 https://people.apache.org/~junrao/kafka-0.8.2.0-candidate1/.
 They should be
 https://people.apache.org/~junrao/kafka-0.8.2.0-candidate2/ right?


 On Tue, Jan 20, 2015 at 8:32 AM, Jun Rao j...@confluent.io wrote:

  This is the second candidate for release of Apache Kafka 0.8.2.0.
  There have been some changes since the 0.8.2 beta release, especially
  in the new Java producer API and JMX MBean names. It would be great if
  people can test this out thoroughly.

 Release Notes for the 0.8.2.0 release
 *https://people.apache.org/~junrao/kafka-0.8.2.0-candidate2/RELEASE_NOTES.html
 https://people.apache.org/~junrao/kafka-0.8.2.0-candidate1/RELEASE_NOTES.html*

  *** Please download, test and vote by Monday, Jan 26th, 7pm PT

 Kafka's KEYS file containing PGP keys we use to sign the release:
 *http://kafka.apache.org/KEYS http://kafka.apache.org/KEYS* in
 addition to the md5, sha1 and sha2 (SHA256) checksum.

 * Release artifacts to be voted upon (source and binary):
 *https://people.apache.org/~junrao/kafka-0.8.2.0-candidate2/
 https://people.apache.org/~junrao/kafka-0.8.2.0-candidate1/*

 * Maven artifacts to be voted upon prior to release:
 *https://repository.apache.org/content/groups/staging/
 https://repository.apache.org/content/groups/staging/*

 * scala-doc
 *https://people.apache.org/~junrao/kafka-0.8.2.0-candidate2/scaladoc/
 https://people.apache.org/~junrao/kafka-0.8.2.0-candidate1/scaladoc/#package*

 * java-doc
 *https://people.apache.org/~junrao/kafka-0.8.2.0-candidate2/javadoc/
 https://people.apache.org/~junrao/kafka-0.8.2.0-candidate1/javadoc/*

 * The tag to be voted upon (off the 0.8.2 branch) is the 0.8.2.0 tag
 *https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=058d58adef2ab2787e49d8efeefd61bb3d32f99c
 https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=058d58adef2ab2787e49d8efeefd61bb3d32f99c*
  (commit 0b312a6b9f0833d38eec434bfff4c647c1814564)

 /***

 Thanks,

 Jun









Re: Review Request 27799: Patch for KAFKA-1760

2015-01-21 Thread Jay Kreps


 On Jan. 13, 2015, 10:32 p.m., Onur Karaman wrote:
  clients/src/main/java/org/apache/kafka/clients/ClientRequest.java, lines 
  34-37
  https://reviews.apache.org/r/27799/diff/2/?file=816201#file816201line34
 
  It looks like you'd want to replace the attachment docs with new 
  callback docs.

Good catch.


- Jay


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/27799/#review67959
---


On Jan. 19, 2015, 3:10 a.m., Jay Kreps wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/27799/
 ---
 
 (Updated Jan. 19, 2015, 3:10 a.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-1760
 https://issues.apache.org/jira/browse/KAFKA-1760
 
 
 Repository: kafka
 
 
 Description
 ---
 
 New consumer.
 
 
 Diffs
 -
 
   build.gradle c9ac43378c3bf5443f0f47c8ba76067237ecb348 
   clients/src/main/java/org/apache/kafka/clients/ClientRequest.java 
 d32c319d8ee4c46dad309ea54b136ea9798e2fd7 
   clients/src/main/java/org/apache/kafka/clients/ClusterConnectionStates.java 
 8aece7e81a804b177a6f2c12e2dc6c89c1613262 
   clients/src/main/java/org/apache/kafka/clients/CommonClientConfigs.java 
 PRE-CREATION 
   clients/src/main/java/org/apache/kafka/clients/ConnectionState.java 
 ab7e3220f9b76b92ef981d695299656f041ad5ed 
   clients/src/main/java/org/apache/kafka/clients/KafkaClient.java 
 397695568d3fd8e835d8f923a89b3b00c96d0ead 
   clients/src/main/java/org/apache/kafka/clients/NetworkClient.java 
 6746275d0b2596cd6ff7ce464a3a8225ad75ef00 
   
 clients/src/main/java/org/apache/kafka/clients/RequestCompletionHandler.java 
 PRE-CREATION 
   clients/src/main/java/org/apache/kafka/clients/consumer/CommitType.java 
 PRE-CREATION 
   clients/src/main/java/org/apache/kafka/clients/consumer/Consumer.java 
 c0c636b3e1ba213033db6d23655032c9bbd5e378 
   clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerConfig.java 
 57c1807ccba9f264186f83e91f37c34b959c8060 
   
 clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerRebalanceCallback.java
  e4cf7d1cfa01c2844b53213a7b539cdcbcbeaa3a 
   clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerRecord.java 
 16af70a5de52cca786fdea147a6a639b7dc4a311 
   
 clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerRecords.java 
 bdf4b26942d5a8c8a9503e05908e9a9eff6228a7 
   clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java 
 76efc216c9e6c3ab084461d792877092a189ad0f 
   clients/src/main/java/org/apache/kafka/clients/consumer/MockConsumer.java 
 fa88ac1a8b19b4294f211c4467fe68c7707ddbae 
   
 clients/src/main/java/org/apache/kafka/clients/consumer/NoOffsetForPartitionException.java
  PRE-CREATION 
   clients/src/main/java/org/apache/kafka/clients/consumer/OffsetMetadata.java 
 ea423ad15eebd262d20d5ec05d592cc115229177 
   
 clients/src/main/java/org/apache/kafka/clients/consumer/internals/Heartbeat.java
  PRE-CREATION 
   
 clients/src/main/java/org/apache/kafka/clients/consumer/internals/NoOpConsumerRebalanceCallback.java
  PRE-CREATION 
   
 clients/src/main/java/org/apache/kafka/clients/consumer/internals/SubscriptionState.java
  PRE-CREATION 
   clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java 
 fc71710dd5997576d3841a1c3b0f7e19a8c9698e 
   clients/src/main/java/org/apache/kafka/clients/producer/MockProducer.java 
 904976fadf0610982958628eaee810b60a98d725 
   clients/src/main/java/org/apache/kafka/clients/producer/ProducerConfig.java 
 8b3e565edd1ae04d8d34bd9f1a41e9fa8c880a75 
   
 clients/src/main/java/org/apache/kafka/clients/producer/internals/Metadata.java
  dcf46581b912cfb1b5c8d4cbc293d2d1444b7740 
   
 clients/src/main/java/org/apache/kafka/clients/producer/internals/Partitioner.java
  483899d2e69b33655d0e08949f5f64af2519660a 
   
 clients/src/main/java/org/apache/kafka/clients/producer/internals/Sender.java 
 ccc03d8447ebba40131a70e16969686ac4aab58a 
   clients/src/main/java/org/apache/kafka/common/Cluster.java 
 d3299b944062d96852452de455902659ad8af757 
   clients/src/main/java/org/apache/kafka/common/PartitionInfo.java 
 b15aa2c3ef2d7c4b24618ff42fd4da324237a813 
   clients/src/main/java/org/apache/kafka/common/config/ConfigDef.java 
 98cb79b701918eca3f6ca9823b6c7b7c97b3ecec 
   clients/src/main/java/org/apache/kafka/common/errors/ApiException.java 
 7c948b166a8ac07616809f260754116ae7764973 
   clients/src/main/java/org/apache/kafka/common/network/Selectable.java 
 b68bbf00ab8eba6c5867d346c91188142593ca6e 
   clients/src/main/java/org/apache/kafka/common/network/Selector.java 
 74d695ba39de44b6a3d15340ec0114bc4fce2ba2 
   clients/src/main/java/org/apache/kafka/common/protocol/Errors.java 
 3316b6a1098311b8603a4a5893bf57b75d2e43cb 
   

Re: Review Request 27799: New consumer

2015-01-21 Thread Jay Kreps

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/27799/
---

(Updated Jan. 21, 2015, 4:47 p.m.)


Review request for kafka.


Bugs: KAFKA-1760
https://issues.apache.org/jira/browse/KAFKA-1760


Repository: kafka


Description (updated)
---

New consumer.

Addressed the first round of comments.


Diffs
-

  build.gradle c9ac43378c3bf5443f0f47c8ba76067237ecb348 
  clients/src/main/java/org/apache/kafka/clients/ClientRequest.java 
d32c319d8ee4c46dad309ea54b136ea9798e2fd7 
  clients/src/main/java/org/apache/kafka/clients/ClusterConnectionStates.java 
8aece7e81a804b177a6f2c12e2dc6c89c1613262 
  clients/src/main/java/org/apache/kafka/clients/CommonClientConfigs.java 
PRE-CREATION 
  clients/src/main/java/org/apache/kafka/clients/ConnectionState.java 
ab7e3220f9b76b92ef981d695299656f041ad5ed 
  clients/src/main/java/org/apache/kafka/clients/KafkaClient.java 
397695568d3fd8e835d8f923a89b3b00c96d0ead 
  clients/src/main/java/org/apache/kafka/clients/NetworkClient.java 
6746275d0b2596cd6ff7ce464a3a8225ad75ef00 
  clients/src/main/java/org/apache/kafka/clients/RequestCompletionHandler.java 
PRE-CREATION 
  clients/src/main/java/org/apache/kafka/clients/consumer/CommitType.java 
PRE-CREATION 
  clients/src/main/java/org/apache/kafka/clients/consumer/Consumer.java 
c0c636b3e1ba213033db6d23655032c9bbd5e378 
  clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerConfig.java 
57c1807ccba9f264186f83e91f37c34b959c8060 
  
clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerRebalanceCallback.java
 e4cf7d1cfa01c2844b53213a7b539cdcbcbeaa3a 
  clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerRecord.java 
16af70a5de52cca786fdea147a6a639b7dc4a311 
  clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerRecords.java 
bdf4b26942d5a8c8a9503e05908e9a9eff6228a7 
  clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java 
76efc216c9e6c3ab084461d792877092a189ad0f 
  clients/src/main/java/org/apache/kafka/clients/consumer/MockConsumer.java 
fa88ac1a8b19b4294f211c4467fe68c7707ddbae 
  
clients/src/main/java/org/apache/kafka/clients/consumer/NoOffsetForPartitionException.java
 PRE-CREATION 
  clients/src/main/java/org/apache/kafka/clients/consumer/OffsetMetadata.java 
ea423ad15eebd262d20d5ec05d592cc115229177 
  
clients/src/main/java/org/apache/kafka/clients/consumer/internals/Heartbeat.java
 PRE-CREATION 
  
clients/src/main/java/org/apache/kafka/clients/consumer/internals/NoOpConsumerRebalanceCallback.java
 PRE-CREATION 
  
clients/src/main/java/org/apache/kafka/clients/consumer/internals/SubscriptionState.java
 PRE-CREATION 
  clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java 
fc71710dd5997576d3841a1c3b0f7e19a8c9698e 
  clients/src/main/java/org/apache/kafka/clients/producer/MockProducer.java 
904976fadf0610982958628eaee810b60a98d725 
  clients/src/main/java/org/apache/kafka/clients/producer/ProducerConfig.java 
8b3e565edd1ae04d8d34bd9f1a41e9fa8c880a75 
  
clients/src/main/java/org/apache/kafka/clients/producer/internals/Metadata.java 
dcf46581b912cfb1b5c8d4cbc293d2d1444b7740 
  
clients/src/main/java/org/apache/kafka/clients/producer/internals/Partitioner.java
 483899d2e69b33655d0e08949f5f64af2519660a 
  clients/src/main/java/org/apache/kafka/clients/producer/internals/Sender.java 
ccc03d8447ebba40131a70e16969686ac4aab58a 
  clients/src/main/java/org/apache/kafka/common/Cluster.java 
d3299b944062d96852452de455902659ad8af757 
  clients/src/main/java/org/apache/kafka/common/PartitionInfo.java 
b15aa2c3ef2d7c4b24618ff42fd4da324237a813 
  clients/src/main/java/org/apache/kafka/common/config/ConfigDef.java 
98cb79b701918eca3f6ca9823b6c7b7c97b3ecec 
  clients/src/main/java/org/apache/kafka/common/errors/ApiException.java 
7c948b166a8ac07616809f260754116ae7764973 
  clients/src/main/java/org/apache/kafka/common/network/Selectable.java 
b68bbf00ab8eba6c5867d346c91188142593ca6e 
  clients/src/main/java/org/apache/kafka/common/network/Selector.java 
74d695ba39de44b6a3d15340ec0114bc4fce2ba2 
  clients/src/main/java/org/apache/kafka/common/protocol/Errors.java 
3316b6a1098311b8603a4a5893bf57b75d2e43cb 
  clients/src/main/java/org/apache/kafka/common/protocol/types/Struct.java 
121e880a941fcd3e6392859edba11a94236494cc 
  clients/src/main/java/org/apache/kafka/common/record/LogEntry.java 
e4d688cbe0c61b74ea15fc8dd3b634f9e5ee9b83 
  clients/src/main/java/org/apache/kafka/common/record/MemoryRecords.java 
040e5b91005edb8f015afdfa76fd94e0bf3cb4ca 
  
clients/src/main/java/org/apache/kafka/common/requests/ConsumerMetadataRequest.java
 99b52c23d639df010bf2affc0f79d1c6e16ed67c 
  
clients/src/main/java/org/apache/kafka/common/requests/ConsumerMetadataResponse.java
 8b8f591c4b2802a9cbbe34746c0b3ca4a64a8681 
  clients/src/main/java/org/apache/kafka/common/requests/FetchRequest.java 

Kafka ecosystem licensing question

2015-01-21 Thread Weitz, Eliot
Hello,

I lead a group of developers at our company, ViaSat, who are building a set of 
stream processing services on top of Kafka.  We would very much like to open 
source our work and become part of the Kafka “ecosystem”, contributing back to 
the community.

Our company is fairly new to participating in open source projects and we are 
wondering about licensing.  If we used something other than an Apache 2 license 
(such as a copyleft license like AGPL), do you think it would be viewed 
negatively by your developers or others in the Kafka ecosystem and become a 
barrier to contributing to our project?

I’d appreciate any insights.

Good work on Kafka!

Regards,

Eliot Weitz

Re: [kafka-clients] [VOTE] 0.8.2.0 Candidate 2

2015-01-21 Thread Manikumar Reddy
Also Maven artifacts link is not correct

On Wed, Jan 21, 2015 at 9:50 PM, Jun Rao j...@confluent.io wrote:

 Yes, will send out a new email with the correct links.

 Thanks,

 Jun

 On Wed, Jan 21, 2015 at 3:12 AM, Manikumar Reddy ku...@nmsworks.co.in
 wrote:

 All links are pointing to
 https://people.apache.org/~junrao/kafka-0.8.2.0-candidate1/.
 They should be
 https://people.apache.org/~junrao/kafka-0.8.2.0-candidate2/ right?


 On Tue, Jan 20, 2015 at 8:32 AM, Jun Rao j...@confluent.io wrote:

  This is the second candidate for release of Apache Kafka 0.8.2.0. There
  have been some changes since the 0.8.2 beta release, especially in the
  new Java producer API and JMX MBean names. It would be great if people can
  test this out thoroughly.

 Release Notes for the 0.8.2.0 release
 *https://people.apache.org/~junrao/kafka-0.8.2.0-candidate2/RELEASE_NOTES.html
 https://people.apache.org/~junrao/kafka-0.8.2.0-candidate1/RELEASE_NOTES.html*

  *** Please download, test and vote by Monday, Jan 26th, 7pm PT

 Kafka's KEYS file containing PGP keys we use to sign the release:
 *http://kafka.apache.org/KEYS http://kafka.apache.org/KEYS* in
 addition to the md5, sha1 and sha2 (SHA256) checksum.

 * Release artifacts to be voted upon (source and binary):
 *https://people.apache.org/~junrao/kafka-0.8.2.0-candidate2/
 https://people.apache.org/~junrao/kafka-0.8.2.0-candidate1/*

 * Maven artifacts to be voted upon prior to release:
 *https://repository.apache.org/content/groups/staging/
 https://repository.apache.org/content/groups/staging/*

 * scala-doc
 *https://people.apache.org/~junrao/kafka-0.8.2.0-candidate2/scaladoc/
 https://people.apache.org/~junrao/kafka-0.8.2.0-candidate1/scaladoc/#package*

 * java-doc
 *https://people.apache.org/~junrao/kafka-0.8.2.0-candidate2/javadoc/
 https://people.apache.org/~junrao/kafka-0.8.2.0-candidate1/javadoc/*

 * The tag to be voted upon (off the 0.8.2 branch) is the 0.8.2.0 tag
 *https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=058d58adef2ab2787e49d8efeefd61bb3d32f99c
 https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=058d58adef2ab2787e49d8efeefd61bb3d32f99c*
  (commit 0b312a6b9f0833d38eec434bfff4c647c1814564)

 /***

 Thanks,

 Jun







[jira] [Issue Comment Deleted] (KAFKA-1664) Kafka does not properly parse multiple ZK nodes with non-root chroot

2015-01-21 Thread Ashish Kumar Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish Kumar Singh updated KAFKA-1664:
--
Comment: was deleted

(was: Testing file 
[KAFKA-1664.2.patch|https://issues.apache.org/jira/secure/attachment/12686456/KAFKA-1664.2.patch]
 against branch trunk took 0:10:03.658955.

{color:red}Overall:{color} -1 due to 11 errors

{color:red}ERROR:{color} Some unit tests failed (report)
{color:red}ERROR:{color} Failed unit test: {{kafka.server.ServerGenerateBrokerIdTest > testConsistentBrokerIdFromUserConfigAndMetaProps FAILED}}
{color:red}ERROR:{color} Failed unit test: {{kafka.server.ServerGenerateBrokerIdTest > testMultipleLogDirsMetaProps FAILED}}
{color:red}ERROR:{color} Failed unit test: {{kafka.server.ServerShutdownTest > testCleanShutdownWithDeleteTopicEnabled FAILED}}
{color:red}ERROR:{color} Failed unit test: {{kafka.server.ServerGenerateBrokerIdTest > testAutoGenerateBrokerId FAILED}}
{color:red}ERROR:{color} Failed unit test: {{kafka.network.SocketServerTest > testMaxConnectionsPerIp FAILED}}
{color:red}ERROR:{color} Failed unit test: {{kafka.server.ServerShutdownTest > testCleanShutdown FAILED}}
{color:red}ERROR:{color} Failed unit test: {{kafka.network.SocketServerTest > testMaxConnectionsPerIPOverrides FAILED}}
{color:red}ERROR:{color} Failed unit test: {{kafka.server.ServerGenerateBrokerIdTest > testUserConfigAndGeneratedBrokerId FAILED}}
{color:red}ERROR:{color} Failed unit test: {{kafka.server.ServerShutdownTest > testCleanShutdownAfterFailedStartup FAILED}}
{color:red}ERROR:{color} Failed unit test: {{kafka.consumer.ZookeeperConsumerConnectorTest > testConsumerRebalanceListener FAILED}}
{color:green}SUCCESS:{color} Gradle bootstrap was successful
{color:green}SUCCESS:{color} Clean was successful
{color:green}SUCCESS:{color} Patch applied correctly
{color:green}SUCCESS:{color} Patch add/modify test case
{color:green}SUCCESS:{color} Gradle bootstrap was successful
{color:green}SUCCESS:{color} Patch compiled

This message is automatically generated.)

 Kafka does not properly parse multiple ZK nodes with non-root chroot
 

 Key: KAFKA-1664
 URL: https://issues.apache.org/jira/browse/KAFKA-1664
 Project: Kafka
  Issue Type: Bug
  Components: clients
Reporter: Ricky Saltzer
Assignee: Ashish Kumar Singh
Priority: Minor
  Labels: newbie
 Attachments: KAFKA-1664.1.patch, KAFKA-1664.2.patch, KAFKA-1664.patch


 When using a non-root ZK directory for Kafka, if you specify multiple ZK 
 servers, Kafka does not seem to properly parse the connection string. 
 *Error*
 {code}
 [root@hodor-001 bin]# ./kafka-console-consumer.sh --zookeeper 
 baelish-001.edh.cloudera.com:2181/kafka,baelish-002.edh.cloudera.com:2181/kafka,baelish-003.edh.cloudera.com:2181/kafka
  --topic test-topic
 [2014-10-01 15:31:04,629] ERROR Error processing message, stopping consumer:  
 (kafka.consumer.ConsoleConsumer$)
 java.lang.IllegalArgumentException: Path length must be > 0
   at org.apache.zookeeper.common.PathUtils.validatePath(PathUtils.java:48)
   at org.apache.zookeeper.common.PathUtils.validatePath(PathUtils.java:35)
   at org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:766)
   at org.I0Itec.zkclient.ZkConnection.create(ZkConnection.java:87)
   at org.I0Itec.zkclient.ZkClient$1.call(ZkClient.java:308)
   at org.I0Itec.zkclient.ZkClient$1.call(ZkClient.java:304)
   at org.I0Itec.zkclient.ZkClient.retryUntilConnected(ZkClient.java:675)
   at org.I0Itec.zkclient.ZkClient.create(ZkClient.java:304)
   at org.I0Itec.zkclient.ZkClient.createPersistent(ZkClient.java:213)
   at org.I0Itec.zkclient.ZkClient.createPersistent(ZkClient.java:223)
   at org.I0Itec.zkclient.ZkClient.createPersistent(ZkClient.java:223)
   at org.I0Itec.zkclient.ZkClient.createPersistent(ZkClient.java:223)
   at kafka.utils.ZkUtils$.createParentPath(ZkUtils.scala:245)
   at kafka.utils.ZkUtils$.createEphemeralPath(ZkUtils.scala:256)
   at 
 kafka.utils.ZkUtils$.createEphemeralPathExpectConflict(ZkUtils.scala:268)
   at 
 kafka.utils.ZkUtils$.createEphemeralPathExpectConflictHandleZKBug(ZkUtils.scala:306)
   at 
 kafka.consumer.ZookeeperConsumerConnector.kafka$consumer$ZookeeperConsumerConnector$$registerConsumerInZK(ZookeeperConsumerConnector.scala:226)
   at 
 kafka.consumer.ZookeeperConsumerConnector$WildcardStreamsHandler.init(ZookeeperConsumerConnector.scala:755)
   at 
 kafka.consumer.ZookeeperConsumerConnector.createMessageStreamsByFilter(ZookeeperConsumerConnector.scala:145)
   at kafka.consumer.ConsoleConsumer$.main(ConsoleConsumer.scala:196)
   at kafka.consumer.ConsoleConsumer.main(ConsoleConsumer.scala)
 {code}
 *Working*
 {code}
 [root@hodor-001 bin]# 
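The failure above comes from treating each comma-separated host entry as carrying its own chroot. A minimal sketch (not Kafka's actual parser; names hypothetical) of the conventional ZooKeeper rule — one chroot suffix after the last host:port pair, applying to all hosts — looks like:

```python
# Hypothetical sketch of ZooKeeper-style connect-string parsing: the
# chroot appears once, after the final host:port pair, and applies to
# every host. A per-host "/kafka" suffix, as in the failing command
# above, is not a valid connect string.
def split_zk_connect(connect):
    idx = connect.find("/")
    if idx == -1:
        return connect.split(","), ""          # no chroot given
    hosts, chroot = connect[:idx], connect[idx:]
    return hosts.split(","), chroot

hosts, chroot = split_zk_connect(
    "baelish-001:2181,baelish-002:2181,baelish-003:2181/kafka")
# hosts  -> ['baelish-001:2181', 'baelish-002:2181', 'baelish-003:2181']
# chroot -> '/kafka'
```

So the working form of the command passes a single trailing `/kafka` after all three `host:port` entries.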

Re: [kafka-clients] [VOTE] 0.8.2.0 Candidate 2

2015-01-21 Thread Jun Rao
Yes, will send out a new email with the correct links.

Thanks,

Jun

On Wed, Jan 21, 2015 at 3:12 AM, Manikumar Reddy ku...@nmsworks.co.in
wrote:

 All links are pointing to
 https://people.apache.org/~junrao/kafka-0.8.2.0-candidate1/.
 They should be https://people.apache.org/~junrao/kafka-0.8.2.0-candidate2/
 right?


 On Tue, Jan 20, 2015 at 8:32 AM, Jun Rao j...@confluent.io wrote:

 This is the second candidate for release of Apache Kafka 0.8.2.0. There
 has been some changes since the 0.8.2 beta release, especially in the
 new java producer api and jmx mbean names. It would be great if people can
 test this out thoroughly.

 Release Notes for the 0.8.2.0 release
 *https://people.apache.org/~junrao/kafka-0.8.2.0-candidate2/RELEASE_NOTES.html
 https://people.apache.org/~junrao/kafka-0.8.2.0-candidate1/RELEASE_NOTES.html*

 *** Please download, test and vote by Monday, Jan 26h, 7pm PT

 Kafka's KEYS file containing PGP keys we use to sign the release:
 *http://kafka.apache.org/KEYS http://kafka.apache.org/KEYS* in
 addition to the md5, sha1 and sha2 (SHA256) checksum.

 * Release artifacts to be voted upon (source and binary):
 *https://people.apache.org/~junrao/kafka-0.8.2.0-candidate2/
 https://people.apache.org/~junrao/kafka-0.8.2.0-candidate1/*

 * Maven artifacts to be voted upon prior to release:
 *https://repository.apache.org/content/groups/staging/
 https://repository.apache.org/content/groups/staging/*

 * scala-doc
 *https://people.apache.org/~junrao/kafka-0.8.2.0-candidate2/scaladoc/
 https://people.apache.org/~junrao/kafka-0.8.2.0-candidate1/scaladoc/#package*

 * java-doc
 *https://people.apache.org/~junrao/kafka-0.8.2.0-candidate2/javadoc/
 https://people.apache.org/~junrao/kafka-0.8.2.0-candidate1/javadoc/*

 * The tag to be voted upon (off the 0.8.2 branch) is the 0.8.2.0 tag
 *https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=058d58adef2ab2787e49d8efeefd61bb3d32f99c
 https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=058d58adef2ab2787e49d8efeefd61bb3d32f99c*
  (commit 0b312a6b9f0833d38eec434bfff4c647c1814564)

 /***

 Thanks,

 Jun

 --
 You received this message because you are subscribed to the Google Groups
 kafka-clients group.
 To unsubscribe from this group and stop receiving emails from it, send an
 email to kafka-clients+unsubscr...@googlegroups.com.
 To post to this group, send email to kafka-clie...@googlegroups.com.
 Visit this group at http://groups.google.com/group/kafka-clients.
 To view this discussion on the web visit
 https://groups.google.com/d/msgid/kafka-clients/CAFc58G-UiYefj%3Dt3C1x85m7q1xjDifTnLSnkujMpP40GHLNwag%40mail.gmail.com
 .
 For more options, visit https://groups.google.com/d/optout.





[jira] [Updated] (KAFKA-1760) Implement new consumer client

2015-01-21 Thread Jay Kreps (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jay Kreps updated KAFKA-1760:
-
Attachment: KAFKA-1760_2015-01-21_08:42:20.patch

 Implement new consumer client
 -

 Key: KAFKA-1760
 URL: https://issues.apache.org/jira/browse/KAFKA-1760
 Project: Kafka
  Issue Type: Sub-task
  Components: consumer
Reporter: Jay Kreps
Assignee: Jay Kreps
 Fix For: 0.8.3

 Attachments: KAFKA-1760.patch, KAFKA-1760_2015-01-11_16:57:15.patch, 
 KAFKA-1760_2015-01-18_19:10:13.patch, KAFKA-1760_2015-01-21_08:42:20.patch


 Implement a consumer client.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1760) Implement new consumer client

2015-01-21 Thread Jay Kreps (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14285851#comment-14285851
 ] 

Jay Kreps commented on KAFKA-1760:
--

Updated reviewboard https://reviews.apache.org/r/27799/diff/
 against branch trunk

 Implement new consumer client
 -

 Key: KAFKA-1760
 URL: https://issues.apache.org/jira/browse/KAFKA-1760
 Project: Kafka
  Issue Type: Sub-task
  Components: consumer
Reporter: Jay Kreps
Assignee: Jay Kreps
 Fix For: 0.8.3

 Attachments: KAFKA-1760.patch, KAFKA-1760_2015-01-11_16:57:15.patch, 
 KAFKA-1760_2015-01-18_19:10:13.patch, KAFKA-1760_2015-01-21_08:42:20.patch


 Implement a consumer client.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [KIP-DISCUSSION] Mirror Maker Enhancement

2015-01-21 Thread Gwen Shapira
Thanks for the answers. Much clearer now :)

Unrelated question: How do you test MirrorMaker (especially around data loss)?
I didn't see any unit-tests or integration tests in trunk.

Gwen

On Wed, Jan 21, 2015 at 9:55 AM, Jiangjie Qin j...@linkedin.com.invalid wrote:
 Hi Gwen,

 Please see inline answers. I'll update them in the KIP as well.

 Thanks.

 Jiangjie (Becket) Qin

 On 1/20/15, 6:39 PM, Gwen Shapira gshap...@cloudera.com wrote:

Thanks for the detailed document, Jiangjie. Super helpful.

Few questions:

1. You mention that "A ConsumerRebalanceListener class is created and
could be wired into ZookeeperConsumerConnector to avoid duplicate
messages when consumer rebalance occurs in mirror maker."

Is this something the user needs to do or configure? or is the wiring
of rebalance listener into the zookeeper consumer will be part of the
enhancement?
In other words, will we need to do anything extra to avoid duplicates
during rebalance in MirrorMaker?
 For ZookeeperConsumerConnector in general, users need to wire in listener
 by themselves in code.
 For Mirror Maker, an internal rebalance listener has been wired in by
 default to avoid duplicates on consumer rebalance. User could still
 specify a custom listener class in command line argument, the internal
 rebalance listener will call that listener after it finishes the default
 logic.

2. "The only source of truth for offsets in consume-then-send pattern
is end user." - I assume you don't mean an actual person, right? So
what does "end user" refer to? Can you clarify when the offset
commit thread will commit offsets? And which JIRA implements this?
 By "end user" I mean the target cluster here. The offset commit thread
 commits offsets periodically, and it only commits the offsets that have
 been acked.
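The commit rule described here — never commit past the lowest un-acked offset — can be sketched as follows (an illustrative model, not the actual mirror-maker code):

```python
# Illustrative model of commit-only-acked offsets in a consume-then-send
# pipeline: the committable position is the smallest offset still in
# flight, so a hard kill can cause duplicates but never data loss.
class OffsetCommitTracker:
    def __init__(self):
        self.pending = set()   # offsets sent to the producer, not yet acked
        self.high = -1         # highest offset handed to the producer

    def sent(self, offset):
        self.pending.add(offset)
        self.high = max(self.high, offset)

    def acked(self, offset):
        self.pending.discard(offset)

    def committable(self):
        # safe commit position: every offset below it has been acked
        return min(self.pending) if self.pending else self.high + 1
```

For example, after sending offsets 0-2 and receiving acks for 1 and 2 only, `committable()` stays at 0 until offset 0 is also acked.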

3. Maintaining message order - In which JIRA do we implement this part?
 KAFKA-1650

Again, thanks a lot for documenting this and even more for the
implementation - it is super important for many use cases.

Gwen


Gwen

On Tue, Jan 20, 2015 at 4:31 PM, Jiangjie Qin j...@linkedin.com.invalid
wrote:
 Hi Kafka Devs,

 We are working on Kafka Mirror Maker enhancement. A KIP is posted to
document and discuss on the followings:
 1. KAFKA-1650: No Data loss mirror maker change
 2. KAFKA-1839: To allow partition aware mirror.
 3. KAFKA-1840: To allow message filtering/format conversion
 Feedbacks are welcome. Please let us know if you have any questions or
concerns.

 Thanks.

 Jiangjie (Becket) Qin



[jira] [Commented] (KAFKA-1886) SimpleConsumer swallowing ClosedByInterruptException

2015-01-21 Thread Chris Riccomini (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14286326#comment-14286326
 ] 

Chris Riccomini commented on KAFKA-1886:


IMO, the SimpleConsumer should at least throw the proper exception.

 SimpleConsumer swallowing ClosedByInterruptException
 

 Key: KAFKA-1886
 URL: https://issues.apache.org/jira/browse/KAFKA-1886
 Project: Kafka
  Issue Type: Bug
  Components: producer 
Reporter: Aditya A Auradkar
Assignee: Jun Rao

 This issue was originally reported by a Samza developer. I've included an 
 exchange of mine with Chris Riccomini. I'm trying to reproduce the problem on 
 my dev setup.
 From: criccomi
 Hey all,
 Samza's BrokerProxy [1] threads appear to be wedging randomly when we try to 
 interrupt its fetcher thread. I noticed that SimpleConsumer.scala catches 
 Throwable in its sendRequest method [2]. I'm wondering: if 
 blockingChannel.send/receive throws a ClosedByInterruptException
 when the thread is interrupted, what happens? It looks like sendRequest will 
 catch the exception (which I
 think clears the thread's interrupted flag), and then retries the send. If 
 the send succeeds on the retry, I think that the ClosedByInterruptException 
 exception is effectively swallowed, and the BrokerProxy will continue
 fetching messages as though its thread was never interrupted.
 Am I misunderstanding how things work?
 Cheers,
 Chris
 [1] 
 https://github.com/apache/incubator-samza/blob/master/samza-kafka/src/main/scala/org/apache/samza/system/kafka/BrokerProxy.scala#L126
 [2] 
 https://github.com/apache/kafka/blob/0.8.1/core/src/main/scala/kafka/consumer/SimpleConsumer.scala#L75
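The hazard described above — a broad catch that clears the thread's interrupted state and then retries — is easiest to see in miniature. A hedged Python analogue (interruption modeled by KeyboardInterrupt; this is not the SimpleConsumer code itself):

```python
# A retry loop must treat interruption as a signal to stop, not as just
# another retryable failure; catching it broadly and retrying "swallows"
# the interrupt, exactly as described for SimpleConsumer.sendRequest.
def send_with_retry(send, retries=1):
    for attempt in range(retries + 1):
        try:
            return send()
        except KeyboardInterrupt:
            raise                  # never swallow an interrupt
        except Exception:
            if attempt == retries:
                raise              # retries exhausted: propagate
```

The Java equivalent of the first `except` arm is catching `ClosedByInterruptException` (or checking `Thread.interrupted()`) separately and restoring the interrupt status instead of retrying.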



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [DISCUSS] KIPs

2015-01-21 Thread Jay Kreps
Hey Gwen,

Could we get the actual changes in that KIP? I.e. changes to metadata
request, changes to UpdateMetadataRequest, new configs and what will their
valid values be, etc. This kind of says that those things will change but
doesn't say what they will change to...

-Jay

On Mon, Jan 19, 2015 at 9:45 PM, Gwen Shapira gshap...@cloudera.com wrote:

 I created a KIP for the multi-port broker change.

 https://cwiki.apache.org/confluence/display/KAFKA/KIP-2+-+Refactor+brokers+to+allow+listening+on+multiple+ports+and+IPs

 I'm not re-opening the discussion, since it was agreed on over a month
 ago and implementation is close to complete (I hope!). Lets consider
 this voted and accepted?

 Gwen

 On Sun, Jan 18, 2015 at 10:31 AM, Jay Kreps jay.kr...@gmail.com wrote:
  Great! Sounds like everyone is on the same page
 
 - I created a template page to make things easier. If you do
 Tools-Copy
 on this page you can just fill in the italic portions with your
 details.
 - I retrofitted KIP-1 to match this formatting
 - I added the metadata section people asked for (a link to the
 discussion, the JIRA, and the current status). Let's make sure we
 remember
 to update the current status as things are figured out.
 - Let's keep the discussion on the mailing list rather than on the
 wiki
 pages. It makes sense to do one or the other so all the comments are
 in one
 place and I think prior experience is that the wiki comments are the
 worse
 way.
 
  I think it would be great do KIPs for some of the in-flight items folks
  mentioned.
 
  -Jay
 
  On Sat, Jan 17, 2015 at 8:23 AM, Gwen Shapira gshap...@cloudera.com
 wrote:
 
  +1
 
  Will be happy to provide a KIP for the multiple-listeners patch.
 
  Gwen
 
  On Sat, Jan 17, 2015 at 8:10 AM, Joe Stein joe.st...@stealth.ly
 wrote:
   +1 to everything we have been saying and where this (has settled
 to)/(is
   settling to).
  
   I am sure other folks have some more feedback and think we should try
 to
   keep this discussion going if need be. I am also a firm believer of
 form
   following function so kicking the tires some to flesh out the details
 of
   this and have some organic growth with the process will be healthy and
   beneficial to the community.
  
   For my part, what I will do is open a few KIP based on some of the
 work I
   have been involved with for 0.8.3. Off the top of my head this would
   include 1) changes to re-assignment of partitions 2) kafka cli 3)
 global
   configs 4) security whitelist/blacklist by IP 5) SSL 6) We probably
  will
   have lots of Security related KIPs and should treat them all
 individually
   when the time is appropriate 7) Kafka on Mesos.
  
   If someone else wants to jump in to start getting some of the security
  KIP
   that we are going to have in 0.8.3 I think that would be great (e.g.
   Multiple Listeners for Kafka Brokers). There are also a few other
  tickets I
   can think of that are important to have in the code in 0.8.3 that
 should
   have KIP also that I haven't really been involved in. I will take a
 few
   minutes and go through JIRA (one I can think of like auto assign id
 that
  is
   already committed I think) and ask for a KIP if appropriate or if I
 feel
   that I can write it up (both from a time and understanding
 perspective)
  do
   so.
  
   long story short, I encourage folks to start moving ahead with the KIP
  for
   0.8.3 as how we operate. any objections?
  
   On Fri, Jan 16, 2015 at 2:40 PM, Guozhang Wang wangg...@gmail.com
  wrote:
  
   +1 on the idea, and we could mutually link the KIP wiki page with the
  the
   created JIRA ticket (i.e. include the JIRA number on the page and the
  KIP
   url on the ticket description).
  
   Regarding the KIP process, we probably do not need two-phase
   communication of a [DISCUSS] followed by [VOTE]; as Jay said, the voting
   should be clear while people discuss it.
  
   About who should trigger the process, I think the only people involved
   would be: 1) the assignee, who could choose to start the KIP process
   when the patch is submitted (or even when the ticket is created) if she
   thought it necessary; and 2) the reviewer of the patch, who can also
   suggest starting KIP discussions.
  
   On Fri, Jan 16, 2015 at 10:49 AM, Gwen Shapira 
 gshap...@cloudera.com
   wrote:
  
+1 to Ewen's suggestions: Deprecation, status and version.
   
Perhaps add the JIRA where the KIP was implemented to the metadata.
This will help tie things together.
   
On Fri, Jan 16, 2015 at 9:35 AM, Ewen Cheslack-Postava
e...@confluent.io wrote:
 I think adding a section about deprecation would be helpful. A
 good
 fraction of the time I would expect the goal of a KIP is to fix
 or
replace
 older functionality that needs continued support for
 compatibility,
  but
 should eventually be phased out. This helps Kafka devs understand
  how
long
 they'll end up supporting multiple 

Re: [KIP-DISCUSSION] Mirror Maker Enhancement

2015-01-21 Thread Jiangjie Qin
Currently it is a manual process. For a functional test, I just set up two
Kafka clusters locally, mirror between them, and keep producing data to one
of the clusters. Then I try a hard kill / bounce of the mirror maker to see
if any messages are lost in the target cluster.

Jiangjie (Becket) Qin

On 1/21/15, 12:24 PM, Gwen Shapira gshap...@cloudera.com wrote:

Thanks for the answers. Much clearer now :)

Unrelated question: How do you test MirrorMaker (especially around data
loss)?
I didn't see any unit-tests or integration tests in trunk.

Gwen

On Wed, Jan 21, 2015 at 9:55 AM, Jiangjie Qin j...@linkedin.com.invalid
wrote:
 Hi Gwen,

 Please see inline answers. I'll update them in the KIP as well.

 Thanks.

 Jiangjie (Becket) Qin

 On 1/20/15, 6:39 PM, Gwen Shapira gshap...@cloudera.com wrote:

Thanks for the detailed document, Jiangjie. Super helpful.

Few questions:

1. You mention that "A ConsumerRebalanceListener class is created and
could be wired into ZookeeperConsumerConnector to avoid duplicate
messages when consumer rebalance occurs in mirror maker."

Is this something the user needs to do or configure? or is the wiring
of rebalance listener into the zookeeper consumer will be part of the
enhancement?
In other words, will we need to do anything extra to avoid duplicates
during rebalance in MirrorMaker?
 For ZookeeperConsumerConnector in general, users need to wire in
listener
 by themselves in code.
 For Mirror Maker, an internal rebalance listener has been wired in by
 default to avoid duplicates on consumer rebalance. User could still
 specify a custom listener class in command line argument, the internal
 rebalance listener will call that listener after it finishes the default
 logic.

2. "The only source of truth for offsets in consume-then-send pattern
is end user." - I assume you don't mean an actual person, right? So
what does "end user" refer to? Can you clarify when the offset
commit thread will commit offsets? And which JIRA implements this?
 By "end user" I mean the target cluster here. The offset commit thread
 commits offsets periodically, and it only commits the offsets that have
 been acked.

3. Maintaining message order - In which JIRA do we implement this part?
 KAFKA-1650

Again, thanks a lot for documenting this and even more for the
implementation - it is super important for many use cases.

Gwen


Gwen

On Tue, Jan 20, 2015 at 4:31 PM, Jiangjie Qin
j...@linkedin.com.invalid
wrote:
 Hi Kafka Devs,

 We are working on Kafka Mirror Maker enhancement. A KIP is posted to
document and discuss on the followings:
 1. KAFKA-1650: No Data loss mirror maker change
 2. KAFKA-1839: To allow partition aware mirror.
 3. KAFKA-1840: To allow message filtering/format conversion
 Feedbacks are welcome. Please let us know if you have any questions or
concerns.

 Thanks.

 Jiangjie (Becket) Qin




Re: [KIP-DISCUSSION] Mirror Maker Enhancement

2015-01-21 Thread Jay Kreps
Hey guys,

A couple questions/comments:

1. The callback and user-controlled commit offset functionality is already
in the new consumer which we are working on in parallel. If we accelerated
that work it might help concentrate efforts. I admit this might take
slightly longer in calendar time but could still probably get done this
quarter. Have you guys considered that approach?

2. I think partitioning on the hash of the topic partition is not a very
good idea because that will make the case of going from a cluster with
fewer partitions to one with more partitions not work. I think an intuitive
way to do this would be the following:
a. Default behavior: Just do what the producer does. I.e. if you specify a
key use it for partitioning, if not just partition in a round-robin fashion.
b. Add a --preserve-partition option that will explicitly inherit the
partition from the source irrespective of whether there is a key or which
partition that key would hash to.
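The two behaviors proposed in point 2 might look like this (a sketch only; the flag name and round-robin details are taken from the proposal above, not from existing code):

```python
import itertools

# Sketch of the proposed mirror-maker partitioning: default to
# producer-like behavior (hash the key if present, else round-robin),
# with an opt-in flag that inherits the source partition directly.
_rr = itertools.count()

def choose_partition(key, source_partition, num_partitions,
                     preserve_partition=False):
    if preserve_partition:
        # --preserve-partition (proposed flag): inherit the source
        # partition regardless of key; only meaningful when the target
        # topic has at least as many partitions as the source
        return source_partition % num_partitions
    if key is not None:
        return hash(key) % num_partitions
    return next(_rr) % num_partitions   # keyless: round-robin
```

Note that the default path works unchanged when the target cluster has a different partition count, which is exactly what partitioning on the hash of the source topic-partition would break.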

3. You don't actually give the ConsumerRebalanceListener interface. What is
that going to look like?

4. What is MirrorMakerRecord? I think ideally the MirrorMakerMessageHandler
interface would take a ConsumerRecord as input and return a ProducerRecord,
right? That would allow you to transform the key, value, partition, or
destination topic...

5. Have you guys thought about what the implementation will look like in
terms of threading architecture etc with the new consumer? That will be
soon so even if we aren't starting with that let's make sure we can get rid
of a lot of the current mirror maker accidental complexity in terms of
threads and queues when we move to that.

-Jay

On Tue, Jan 20, 2015 at 4:31 PM, Jiangjie Qin j...@linkedin.com.invalid
wrote:

 Hi Kafka Devs,

 We are working on Kafka Mirror Maker enhancement. A KIP is posted to
 document and discuss on the followings:
 1. KAFKA-1650: No Data loss mirror maker change
 2. KAFKA-1839: To allow partition aware mirror.
 3. KAFKA-1840: To allow message filtering/format conversion
 Feedbacks are welcome. Please let us know if you have any questions or
 concerns.

 Thanks.

 Jiangjie (Becket) Qin



[jira] [Commented] (KAFKA-1886) SimpleConsumer swallowing ClosedByInterruptException

2015-01-21 Thread Aditya A Auradkar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14286287#comment-14286287
 ] 

Aditya A Auradkar commented on KAFKA-1886:
--

[~junrao] any thoughts?

 SimpleConsumer swallowing ClosedByInterruptException
 

 Key: KAFKA-1886
 URL: https://issues.apache.org/jira/browse/KAFKA-1886
 Project: Kafka
  Issue Type: Bug
  Components: producer 
Reporter: Aditya A Auradkar
Assignee: Jun Rao

 This issue was originally reported by a Samza developer. I've included an 
 exchange of mine with Chris Riccomini. I'm trying to reproduce the problem on 
 my dev setup.
 From: criccomi
 Hey all,
 Samza's BrokerProxy [1] threads appear to be wedging randomly when we try to 
 interrupt its fetcher thread. I noticed that SimpleConsumer.scala catches 
 Throwable in its sendRequest method [2]. I'm wondering: if 
 blockingChannel.send/receive throws a ClosedByInterruptException
 when the thread is interrupted, what happens? It looks like sendRequest will 
 catch the exception (which I
 think clears the thread's interrupted flag), and then retries the send. If 
 the send succeeds on the retry, I think that the ClosedByInterruptException 
 exception is effectively swallowed, and the BrokerProxy will continue
 fetching messages as though its thread was never interrupted.
 Am I misunderstanding how things work?
 Cheers,
 Chris
 [1] 
 https://github.com/apache/incubator-samza/blob/master/samza-kafka/src/main/scala/org/apache/samza/system/kafka/BrokerProxy.scala#L126
 [2] 
 https://github.com/apache/kafka/blob/0.8.1/core/src/main/scala/kafka/consumer/SimpleConsumer.scala#L75



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


NIO and Threading implementation

2015-01-21 Thread Chittaranjan Hota
Hello,
Congratulations to the folks behind Kafka. It has been a smooth ride
dealing with multi-TB data where the same setup in JMS fell apart often.

Although I have been using Kafka for more than a few days now, I only
started looking into the code base yesterday and already have doubts at
the very beginning. I would need some input on why the implementation is
done the way it is.

Version : 0.8.1

THREADING RELATED
1. Why is the startup code synchronized? Who are the competing threads?
   a. The startReporters function is synchronized.
   b. KafkaScheduler startup is synchronized, yet there is also a volatile
variable declared even though the synchronized block itself guarantees
happens-before.
   c. Native new Thread syntax is used instead of relying on an
ExecutorService.
   d. The processor threads use a CountDownLatch, but the main thread
doesn't await the processors' signal that startup is complete.


NIO RELATED
2.
   a. The Acceptor and each Processor thread have their own selector (since
they extend the abstract class AbstractServerThread). Ideally a single
selector suffices for multiplexing. Is there any reason why multiple
selectors are used?
   b. The selector wakeup calls by Processors in the read method (line 362
of SocketServer.scala) are MISSED calls, since there is no thread waiting
on the select at that point.
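On question 2a, one common reason for per-thread selectors is to avoid cross-thread contention on a single shared selector: the acceptor only hands sockets to a processor's queue, and each processor registers and polls on its own selector. A minimal sketch of that pattern (illustrative Python, not Kafka's SocketServer):

```python
import queue
import selectors

# One selector per processor thread: the acceptor never touches the
# selector directly, it only enqueues new connections; the processor
# drains the queue and registers them on its own selector before
# polling. This is the structure being asked about in 2a.
class Processor:
    def __init__(self):
        self.selector = selectors.DefaultSelector()
        self.new_conns = queue.Queue()

    def accept(self, conn):
        # called from the acceptor thread
        self.new_conns.put(conn)

    def poll_once(self, timeout=0.1):
        while not self.new_conns.empty():
            conn = self.new_conns.get()
            conn.setblocking(False)
            self.selector.register(conn, selectors.EVENT_READ)
        return self.selector.select(timeout)
```

On 2b: in Java NIO, `Selector.wakeup()` invoked while no thread is blocked in `select()` is remembered — the next `select()` call returns immediately — so those wakeups are deferred rather than lost.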

Looking forward to learning the code further!
Thanks in advance.

Regards,
Chitta


Re: [kafka-clients] Re: Heads up: KAFKA-1697 - remove code related to ack>1 on the broker

2015-01-21 Thread Gwen Shapira
We have the new warning in 0.8.2.

I updated KIP-1 with the new plan:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-1+-+Remove+support+of+request.required.acks

I'm waiting a day for additional discussion and if there are no
replies, I'll send the [VOTE] email.

Gwen

On Mon, Jan 19, 2015 at 10:00 AM, Gwen Shapira gshap...@cloudera.com wrote:
 Sounds good to me.
 I'll open a new JIRA for 0.8.2 with just an extra log warning, to
 avoid making KAFKA-1697 any more confusing.

 On Mon, Jan 19, 2015 at 9:46 AM, Joe Stein joe.st...@stealth.ly wrote:
  For 2, how about we make a change to log a warning for ack > 1 in 0.8.2
 and then drop the ack > 1 support in trunk (w/o bumping up the protocol
 version)?

 +1


 On Mon, Jan 19, 2015 at 12:35 PM, Jun Rao j...@confluent.io wrote:

  For 2, how about we make a change to log a warning for ack > 1 in 0.8.2
  and then drop the ack > 1 support in trunk (w/o bumping up the protocol
  version)? Thanks,

 Jun

 On Sun, Jan 18, 2015 at 8:24 PM, Gwen Shapira gshap...@cloudera.com
 wrote:

 Overall, agree on point #1, less sure on point #2.

 1. Some protocols never ever add new errors, while others add errors
 without bumping versions. HTTP is a good example of the second type.
 HTTP-451 was added fairly recently, there are some errors specific to
 NGINX, etc. No one cares. I think we should properly document in the
 wire-protocol doc that new errors can be added, and I think we should
 strongly suggest (and implement ourselves) that unknown error codes
 should be shown to users (or at least logged), so they can be googled
 and understood through our documentation.
 In addition, hierarchy of error codes, so clients will know if an
 error is retry-able just by looking at the code could be nice. Same
 for adding an error string to the protocol. These are future
 enhancements that should be discussed separately.

 2. I think we want to allow admins to upgrade their Kafka brokers
 without having to chase down clients in their organization and without
 getting blamed if clients break. I think it makes sense to have one
 version that will support existing behavior, but log warnings, so
 admins will know about misbehaving clients and can track them down
 before an upgrade that breaks them (or before the broken config causes
 them to lose data!). Hopefully this is indeed a very rare behavior and
 we are taking extra precaution for nothing, but I have customers where
 one traumatic upgrade means they will never upgrade a Kafka again, so
 I'm being conservative.

 Gwen


 On Sun, Jan 18, 2015 at 3:50 PM, Jun Rao j...@confluent.io wrote:
  Overall, I agree with Jay on both points.
 
  1. I think it's reasonable to add new error codes w/o bumping up the
  protocol version. In most cases, by adding new error codes, we are just
  refining the categorization of those unknown errors. So, a client
  shouldn't
  behave worse than before as long as unknown errors have been properly
  handled.
 
   2. I think it's reasonable to just document that 0.8.2 will be the last
   release that will support ack > 1 and remove the support completely in
   trunk w/o bumping up the protocol. This is because (a) we never included
   ack > 1 explicitly in the documentation and so the usage should be
   limited; (b) ack > 1 doesn't provide the guarantee that people really
   want and so it shouldn't really be used.
 
  Thanks,
 
  Jun
 
 
  On Sun, Jan 18, 2015 at 11:03 AM, Jay Kreps jay.kr...@gmail.com
  wrote:
 
  Hey guys,
 
  I really think we are discussing two things here:
 
  How should we generally handle changes to the set of errors? Should
  introducing new errors be considered a protocol change or should we
  reserve
  the right to introduce new error codes?
  Given that this particular change is possibly incompatible, how should
  we
  handle it?
 
  I think it would be good for people who are responding here to be
  specific
  about which they are addressing.
 
  Here is what I think:
 
  1. Errors should be extensible within a protocol version.
 
  We should change the protocol documentation to list the errors that
  can be
  given back from each api, their meaning, and how to handle them, BUT
  we
   should explicitly state that the set of errors is open-ended. That is,
  we
  should reserve the right to introduce new errors and explicitly state
  that
  clients need a blanket unknown error handling mechanism. The error
  can
  link to the protocol definition (something like Unknown error 42, see
  protocol definition at http://link;). We could make this work really
  well by
  instructing all the clients to report the error in a very googlable
  way as
  Oracle does with their error format (e.g. ORA-32) so that if you
  ever get
  the raw error google will take you to the definition.
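A client-side sketch of the "blanket unknown error" handling described here (the mapping is abbreviated and illustrative, not the full Kafka error table):

```python
# Open-ended error space: known codes get a specific name; anything
# else is surfaced in a consistent, searchable format rather than
# crashing the client. Codes shown match the Kafka protocol's first
# few entries but the table is deliberately incomplete.
KNOWN_ERRORS = {
    0: "NONE",
    1: "OFFSET_OUT_OF_RANGE",
    6: "NOT_LEADER_FOR_PARTITION",
}

def describe_error(code):
    if code in KNOWN_ERRORS:
        return KNOWN_ERRORS[code]
    # blanket fallback, formatted to be googlable (cf. ORA-style codes)
    return "KAFKA-%d: unknown error, see protocol definition" % code
```

The point is that `describe_error` never fails on a code the client has not seen before, which is what lets the server add error codes without a protocol version bump.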
 
   I agree that a more rigid definition seems like the right thing, but
   having just implemented two clients and spent a bunch of time on the
   server side, I think it will work out poorly in practice. Here is 

Re: Review Request 29728: Patch for KAFKA-1848

2015-01-21 Thread Guozhang Wang

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/29728/#review69107
---

Ship it!


Ship It!

- Guozhang Wang


On Jan. 8, 2015, 10:49 p.m., Aditya Auradkar wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/29728/
 ---
 
 (Updated Jan. 8, 2015, 10:49 p.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-1848
 https://issues.apache.org/jira/browse/KAFKA-1848
 
 
 Repository: kafka
 
 
 Description
 ---
 
 Fix for KAFKA-1848.
 
 
 Diffs
 -
 
   core/src/main/scala/kafka/consumer/ZookeeperConsumerConnector.scala 
 191a8677444e53b043e9ad6e94c5a9191c32599e 
 
 Diff: https://reviews.apache.org/r/29728/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Aditya Auradkar
 




[jira] [Commented] (KAFKA-1697) remove code related to ack>1 on the broker

2015-01-21 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14286676#comment-14286676
 ] 

Jun Rao commented on KAFKA-1697:


From the discussion in the mailing list, we decided not to bump up the version 
for the ProduceRequest. Instead, we will log a warning in 0.8.2 that ack>1 
will no longer be supported. In 0.8.3, we will throw an exception for requests 
with ack>1 and remove the support from the code. [~gwenshap], do you want to 
update KIP-1 in the wiki accordingly?

 remove code related to ack>1 on the broker
 --

 Key: KAFKA-1697
 URL: https://issues.apache.org/jira/browse/KAFKA-1697
 Project: Kafka
  Issue Type: Bug
Reporter: Jun Rao
Assignee: Gwen Shapira
 Fix For: 0.8.3

 Attachments: KAFKA-1697.patch, KAFKA-1697_2015-01-14_15:41:37.patch


 We removed the ack>1 support from the producer client in kafka-1555. We can 
 completely remove the code in the broker that supports ack>1.
 Also, we probably want to make NotEnoughReplicasAfterAppend a non-retriable 
 exception and let the client decide what to do.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 27391: Fix KAFKA-1634

2015-01-21 Thread Guozhang Wang


 On Jan. 20, 2015, 4:35 p.m., Joel Koshy wrote:
  core/src/main/scala/kafka/server/KafkaApis.scala, line 147
  https://reviews.apache.org/r/27391/diff/8/?file=822017#file822017line147
 
  I just realized that if we have a v0 or v1 request then we use the 
  offset manager default retention which is one day.
  
  However, if it is v2 and the user does not override it in the offset 
  commit request, then the retention defaults to Long.MaxValue. I think that 
  default makes sense for OffsetCommitRequest. However, I think the broker 
  needs to protect itself and have an upper threshold for retention. i.e., 
  maybe we should have a maxRetentionMs config in the broker.
  
  What do you think?

Agreed, I changed the behavior to use the default value if it is < v2 or if 
the retention period is the default value (meaning the user did not specify it).
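The resolution described above, together with Joel's proposed broker-side cap, might look roughly like this (the sentinel and cap values are assumptions for illustration, not actual Kafka config names):

```java
// Sketch of broker-side retention resolution for offset commits.
class OffsetRetention {
    static final long DEFAULT_SENTINEL = Long.MAX_VALUE;            // "not set" in a v2 request
    static final long BROKER_DEFAULT_MS = 24L * 60 * 60 * 1000;     // one day
    static final long MAX_RETENTION_MS = 7L * 24 * 60 * 60 * 1000;  // hypothetical upper bound

    /** Retention the broker actually applies for a commit request. */
    static long resolve(int apiVersion, long requestedMs) {
        // v0/v1 requests carry no retention field; unset v2 requests
        // carry the sentinel: both fall back to the broker default.
        if (apiVersion < 2 || requestedMs == DEFAULT_SENTINEL)
            return BROKER_DEFAULT_MS;
        // Cap client-supplied values so the broker protects itself.
        return Math.min(requestedMs, MAX_RETENTION_MS);
    }
}
```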


- Guozhang


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/27391/#review68729
---


On Jan. 14, 2015, 11:50 p.m., Guozhang Wang wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/27391/
 ---
 
 (Updated Jan. 14, 2015, 11:50 p.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-1634
 https://issues.apache.org/jira/browse/KAFKA-1634
 
 
 Repository: kafka
 
 
 Description
 ---
 
 Incorporated Joel and Jun's comments
 
 
 Diffs
 -
 
   clients/src/main/java/org/apache/kafka/common/protocol/Protocol.java 
 7517b879866fc5dad5f8d8ad30636da8bbe7784a 
   clients/src/main/java/org/apache/kafka/common/protocol/types/Struct.java 
 121e880a941fcd3e6392859edba11a94236494cc 
   
 clients/src/main/java/org/apache/kafka/common/requests/OffsetCommitRequest.java
  3ee5cbad55ce836fd04bb954dcf6ef2f9bc3288f 
   
 clients/src/test/java/org/apache/kafka/common/requests/RequestResponseTest.java
  df37fc6d8f0db0b8192a948426af603be3444da4 
   core/src/main/scala/kafka/api/OffsetCommitRequest.scala 
 050615c72efe7dbaa4634f53943bd73273d20ffb 
   core/src/main/scala/kafka/api/OffsetFetchRequest.scala 
 c7604b9cdeb8f46507f04ed7d35fcb3d5f150713 
   core/src/main/scala/kafka/common/OffsetMetadataAndError.scala 
 4cabffeacea09a49913505db19a96a55d58c0909 
   core/src/main/scala/kafka/consumer/ZookeeperConsumerConnector.scala 
 191a8677444e53b043e9ad6e94c5a9191c32599e 
   core/src/main/scala/kafka/server/KafkaApis.scala 
 c011a1b79bd6c4e832fe7d097daacb0d647d1cd4 
   core/src/main/scala/kafka/server/KafkaServer.scala 
 a069eb9272c92ef62387304b60de1fe473d7ff49 
   core/src/main/scala/kafka/server/OffsetManager.scala 
 3c79428962604800983415f6f705e04f52acb8fb 
   core/src/main/scala/kafka/server/ReplicaManager.scala 
 e58fbb922e93b0c31dff04f187fcadb4ece986d7 
   core/src/test/scala/unit/kafka/api/RequestResponseSerializationTest.scala 
 cd16ced5465d098be7a60498326b2a98c248f343 
   core/src/test/scala/unit/kafka/server/OffsetCommitTest.scala 
 4a3a5b264a021e55c39f4d7424ce04ee591503ef 
   core/src/test/scala/unit/kafka/server/ServerShutdownTest.scala 
 ba1e48e4300c9fb32e36e7266cb05294f2a481e5 
 
 Diff: https://reviews.apache.org/r/27391/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Guozhang Wang
 




[jira] [Commented] (KAFKA-1697) remove code related to ack>1 on the broker

2015-01-21 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14286696#comment-14286696
 ] 

Gwen Shapira commented on KAFKA-1697:
-

Updated! 

 remove code related to ack>1 on the broker
 --

 Key: KAFKA-1697
 URL: https://issues.apache.org/jira/browse/KAFKA-1697
 Project: Kafka
  Issue Type: Bug
Reporter: Jun Rao
Assignee: Gwen Shapira
 Fix For: 0.8.3

 Attachments: KAFKA-1697.patch, KAFKA-1697_2015-01-14_15:41:37.patch


 We removed the ack>1 support from the producer client in kafka-1555. We can 
 completely remove the code in the broker that supports ack>1.
 Also, we probably want to make NotEnoughReplicasAfterAppend a non-retriable 
 exception and let the client decide what to do.





Re: Review Request 30126: Patch for KAFKA-1845

2015-01-21 Thread Eric Olander

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/30126/#review69063
---



core/src/main/scala/kafka/server/KafkaConfig.scala
https://reviews.apache.org/r/30126/#comment113635

It seems that by convention there is a ...Prop and a ...Doc constant, but 
nothing enforces that.  Maybe have 
val ZKConnect = ("zookeeper.connect", "Zookeeper host string") 
so it is more apparent that these two values are needed and related.  A 
utility class would be better than using a Tuple2, but that's the general idea.



core/src/main/scala/kafka/server/KafkaConfig.scala
https://reviews.apache.org/r/30126/#comment113634

Maybe some helper functions could help with this code:

def stringProp(prop: String) = parsed.get(prop).asInstanceOf[String]

then:
zkConnect = stringProp(ZkConnectProp)


- Eric Olander


On Jan. 21, 2015, 5:49 p.m., Andrii Biletskyi wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/30126/
 ---
 
 (Updated Jan. 21, 2015, 5:49 p.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-1845
 https://issues.apache.org/jira/browse/KAFKA-1845
 
 
 Repository: kafka
 
 
 Description
 ---
 
 KAFKA-1845 - Fixed merge conflicts, ported added configs to KafkaConfig
 
 
 KAFKA-1845 - KafkaConfig to ConfigDef: moved validateValues so it's called on 
 instantiating KafkaConfig
 
 
 KAFKA-1845 - KafkaConfig to ConfigDef: MaxConnectionsPerIpOverrides refactored
 
 
 Diffs
 -
 
   clients/src/main/java/org/apache/kafka/common/config/ConfigDef.java 
 98cb79b701918eca3f6ca9823b6c7b7c97b3ecec 
   core/src/main/scala/kafka/Kafka.scala 
 77a49e12af6f869e63230162e9f87a7b0b12b610 
   core/src/main/scala/kafka/controller/KafkaController.scala 
 66df6d2fbdbdd556da6bea0df84f93e0472c8fbf 
   core/src/main/scala/kafka/controller/PartitionLeaderSelector.scala 
 4a31c7271c2d0a4b9e8b28be729340ecfa0696e5 
   core/src/main/scala/kafka/server/KafkaConfig.scala 
 6d74983472249eac808d361344c58cc2858ec971 
   core/src/main/scala/kafka/server/KafkaServer.scala 
 89200da30a04943f0b9befe84ab17e62b747c8c4 
   core/src/main/scala/kafka/server/ReplicaFetcherThread.scala 
 6879e730282185bda3d6bc3659cb15af0672cecf 
   core/src/test/scala/integration/kafka/api/ProducerCompressionTest.scala 
 e63558889272bc76551accdfd554bdafde2e0dd6 
   core/src/test/scala/integration/kafka/api/ProducerFailureHandlingTest.scala 
 90c0b7a19c7af8e5416e4bdba62b9824f1abd5ab 
   core/src/test/scala/integration/kafka/api/ProducerSendTest.scala 
 b15237b76def3b234924280fa3fdb25dbb0cc0dc 
   core/src/test/scala/unit/kafka/admin/AddPartitionsTest.scala 
 1bf2667f47853585bc33ffb3e28256ec5f24ae84 
   core/src/test/scala/unit/kafka/admin/AdminTest.scala 
 e28979827110dfbbb92fe5b152e7f1cc973de400 
   core/src/test/scala/unit/kafka/admin/DeleteTopicTest.scala 
 33c27678bf8ae8feebcbcdaa4b90a1963157b4a5 
   core/src/test/scala/unit/kafka/consumer/ConsumerIteratorTest.scala 
 c0355cc0135c6af2e346b4715659353a31723b86 
   
 core/src/test/scala/unit/kafka/consumer/ZookeeperConsumerConnectorTest.scala 
 a17e8532c44aadf84b8da3a57bcc797a848b5020 
   core/src/test/scala/unit/kafka/integration/AutoOffsetResetTest.scala 
 95303e098d40cd790fb370e9b5a47d20860a6da3 
   core/src/test/scala/unit/kafka/integration/FetcherTest.scala 
 25845abbcad2e79f56f729e59239b738d3ddbc9d 
   core/src/test/scala/unit/kafka/integration/PrimitiveApiTest.scala 
 a5386a03b62956bc440b40783247c8cdf7432315 
   core/src/test/scala/unit/kafka/integration/RollingBounceTest.scala 
 eab4b5f619015af42e4554660eafb5208e72ea33 
   core/src/test/scala/unit/kafka/integration/TopicMetadataTest.scala 
 35dc071b1056e775326981573c9618d8046e601d 
   core/src/test/scala/unit/kafka/integration/UncleanLeaderElectionTest.scala 
 ba3bcdcd1de9843e75e5395dff2fc31b39a5a9d5 
   
 core/src/test/scala/unit/kafka/javaapi/consumer/ZookeeperConsumerConnectorTest.scala
  d6248b09bb0f86ee7d3bd0ebce5b99135491453b 
   core/src/test/scala/unit/kafka/log/LogTest.scala 
 c2dd8eb69da8c0982a0dd20231c6f8bd58eb623e 
   core/src/test/scala/unit/kafka/log4j/KafkaLog4jAppenderTest.scala 
 4ea0489c9fd36983fe190491a086b39413f3a9cd 
   core/src/test/scala/unit/kafka/metrics/MetricsTest.scala 
 3cf23b3d6d4460535b90cfb36281714788fc681c 
   core/src/test/scala/unit/kafka/producer/AsyncProducerTest.scala 
 1db6ac329f7b54e600802c8a623f80d159d4e69b 
   core/src/test/scala/unit/kafka/producer/ProducerTest.scala 
 ce65dab4910d9182e6774f6ef1a7f45561ec0c23 
   core/src/test/scala/unit/kafka/producer/SyncProducerTest.scala 
 d60d8e0f49443f4dc8bc2cad6e2f951eda28f5cb 
   core/src/test/scala/unit/kafka/server/AdvertiseBrokerTest.scala 
 f0c4a56b61b4f081cf4bee799c6e9c523ff45e19 
   

[jira] [Commented] (KAFKA-1782) Junit3 Misusage

2015-01-21 Thread Jeff Holoman (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14286491#comment-14286491
 ] 

Jeff Holoman commented on KAFKA-1782:
-

Thank you for the feedback [~guozhang]. I will get to work on this.

 Junit3 Misusage
 ---

 Key: KAFKA-1782
 URL: https://issues.apache.org/jira/browse/KAFKA-1782
 Project: Kafka
  Issue Type: Bug
Reporter: Guozhang Wang
Assignee: Jeff Holoman
  Labels: newbie
 Fix For: 0.8.3


 This is found while I was working on KAFKA-1580: in many of our cases where 
 we explicitly extend from junit3suite (e.g. ProducerFailureHandlingTest), we 
 are actually misusing a bunch of features that only exist in Junit4, such as 
 @Test(expected = classOf[...]). For example, the following code
 {code}
 import org.scalatest.junit.JUnit3Suite
 import org.junit.Test
 import java.io.IOException
 class MiscTest extends JUnit3Suite {
   @Test (expected = classOf[IOException])
   def testSendOffset() {
   }
 }
 {code}
 will actually pass even though IOException was not thrown since this 
 annotation is not supported in Junit3. Whereas
 {code}
 import org.junit._
 import java.io.IOException
 class MiscTest extends JUnit3Suite {
   @Test (expected = classOf[IOException])
   def testSendOffset() {
   }
 }
 {code}
 or
 {code}
 import org.scalatest.junit.JUnitSuite
 import org.junit._
 import java.io.IOException
  class MiscTest extends JUnitSuite {
   @Test (expected = classOf[IOException])
   def testSendOffset() {
   }
 }
 {code}
 or
 {code}
 import org.junit._
 import java.io.IOException
 class MiscTest {
   @Test (expected = classOf[IOException])
   def testSendOffset() {
   }
 }
 {code}
 will fail.
 I would propose to not rely on Junit annotations other than @Test itself but 
 use scala unit test annotations instead, for example:
 {code}
 import org.junit._
 import java.io.IOException
 class MiscTest {
   @Test
   def testSendOffset() {
 intercept[IOException] {
   //nothing
 }
   }
 }
 {code}
 will fail with a clearer stacktrace.
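 For comparison, the intercept pattern expressed in plain Java shows why it
 fails loudly where a silently-ignored annotation cannot (an illustration
 only, not Kafka test code):

```java
import java.io.IOException;

class InterceptExample {
    interface Body { void run() throws Throwable; }

    /** Run the body and demand that it throws the expected exception type. */
    static <T extends Throwable> T intercept(Class<T> expected, Body body) {
        try {
            body.run();
        } catch (Throwable t) {
            if (expected.isInstance(t))
                return expected.cast(t);
            throw new AssertionError("Expected " + expected.getName() + " but got " + t, t);
        }
        // Unlike an unsupported annotation, a missing exception fails loudly here.
        throw new AssertionError("Expected " + expected.getName() + " but nothing was thrown");
    }
}
```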





[jira] [Commented] (KAFKA-1729) add doc for Kafka-based offset management in 0.8.2

2015-01-21 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14286651#comment-14286651
 ] 

Jun Rao commented on KAFKA-1729:


[~jjkoshy], are you on track to complete the doc for the 0.8.2 release?

 add doc for Kafka-based offset management in 0.8.2
 --

 Key: KAFKA-1729
 URL: https://issues.apache.org/jira/browse/KAFKA-1729
 Project: Kafka
  Issue Type: Sub-task
Reporter: Jun Rao
Assignee: Joel Koshy
 Fix For: 0.8.2

 Attachments: KAFKA-1782-doc-v1.patch, KAFKA-1782-doc-v2.patch








Re: Review Request 27391: Fix KAFKA-1634

2015-01-21 Thread Guozhang Wang

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/27391/
---

(Updated Jan. 22, 2015, 12:43 a.m.)


Review request for kafka.


Bugs: KAFKA-1634
https://issues.apache.org/jira/browse/KAFKA-1634


Repository: kafka


Description (updated)
---

Incorporated Joel's comments


Diffs (updated)
-

  clients/src/main/java/org/apache/kafka/common/protocol/Protocol.java 
7517b879866fc5dad5f8d8ad30636da8bbe7784a 
  clients/src/main/java/org/apache/kafka/common/protocol/types/Struct.java 
121e880a941fcd3e6392859edba11a94236494cc 
  
clients/src/main/java/org/apache/kafka/common/requests/OffsetCommitRequest.java 
3ee5cbad55ce836fd04bb954dcf6ef2f9bc3288f 
  
clients/src/test/java/org/apache/kafka/common/requests/RequestResponseTest.java 
df37fc6d8f0db0b8192a948426af603be3444da4 
  core/src/main/scala/kafka/api/OffsetCommitRequest.scala 
050615c72efe7dbaa4634f53943bd73273d20ffb 
  core/src/main/scala/kafka/api/OffsetFetchRequest.scala 
c7604b9cdeb8f46507f04ed7d35fcb3d5f150713 
  core/src/main/scala/kafka/common/OffsetMetadataAndError.scala 
4cabffeacea09a49913505db19a96a55d58c0909 
  core/src/main/scala/kafka/consumer/ZookeeperConsumerConnector.scala 
191a8677444e53b043e9ad6e94c5a9191c32599e 
  core/src/main/scala/kafka/server/KafkaApis.scala 
ec8d9f7ba44741db40875458bd524c4062ad6a26 
  core/src/main/scala/kafka/server/KafkaConfig.scala 
6d74983472249eac808d361344c58cc2858ec971 
  core/src/main/scala/kafka/server/KafkaServer.scala 
89200da30a04943f0b9befe84ab17e62b747c8c4 
  core/src/main/scala/kafka/server/OffsetManager.scala 
0bdd42fea931cddd072c0fff765b10526db6840a 
  core/src/main/scala/kafka/server/ReplicaManager.scala 
e58fbb922e93b0c31dff04f187fcadb4ece986d7 
  core/src/test/scala/unit/kafka/api/RequestResponseSerializationTest.scala 
cd16ced5465d098be7a60498326b2a98c248f343 
  core/src/test/scala/unit/kafka/server/OffsetCommitTest.scala 
5b93239cdc26b5be7696f4e7863adb9fbe5f0ed5 
  core/src/test/scala/unit/kafka/server/ServerShutdownTest.scala 
ba1e48e4300c9fb32e36e7266cb05294f2a481e5 

Diff: https://reviews.apache.org/r/27391/diff/


Testing
---


Thanks,

Guozhang Wang



[jira] [Commented] (KAFKA-1634) Improve semantics of timestamp in OffsetCommitRequests and update documentation

2015-01-21 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14286662#comment-14286662
 ] 

Guozhang Wang commented on KAFKA-1634:
--

Updated reviewboard https://reviews.apache.org/r/27391/diff/
 against branch origin/trunk

 Improve semantics of timestamp in OffsetCommitRequests and update 
 documentation
 ---

 Key: KAFKA-1634
 URL: https://issues.apache.org/jira/browse/KAFKA-1634
 Project: Kafka
  Issue Type: Bug
Reporter: Neha Narkhede
Assignee: Guozhang Wang
Priority: Blocker
 Fix For: 0.8.3

 Attachments: KAFKA-1634.patch, KAFKA-1634_2014-11-06_15:35:46.patch, 
 KAFKA-1634_2014-11-07_16:54:33.patch, KAFKA-1634_2014-11-17_17:42:44.patch, 
 KAFKA-1634_2014-11-21_14:00:34.patch, KAFKA-1634_2014-12-01_11:44:35.patch, 
 KAFKA-1634_2014-12-01_18:03:12.patch, KAFKA-1634_2015-01-14_15:50:15.patch, 
 KAFKA-1634_2015-01-21_16:43:01.patch


 From the mailing list -
 following up on this -- I think the online API docs for OffsetCommitRequest
 still incorrectly refer to client-side timestamps:
 https://cwiki.apache.org/confluence/display/KAFKA/A+Guide+To+The+Kafka+Protocol#AGuideToTheKafkaProtocol-OffsetCommitRequest
 Wasn't that removed, and isn't it always handled server-side now?  Would one of
 the devs mind updating the API spec wiki?





[jira] [Updated] (KAFKA-1634) Improve semantics of timestamp in OffsetCommitRequests and update documentation

2015-01-21 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-1634:
-
Attachment: KAFKA-1634_2015-01-21_16:43:01.patch

 Improve semantics of timestamp in OffsetCommitRequests and update 
 documentation
 ---

 Key: KAFKA-1634
 URL: https://issues.apache.org/jira/browse/KAFKA-1634
 Project: Kafka
  Issue Type: Bug
Reporter: Neha Narkhede
Assignee: Guozhang Wang
Priority: Blocker
 Fix For: 0.8.3

 Attachments: KAFKA-1634.patch, KAFKA-1634_2014-11-06_15:35:46.patch, 
 KAFKA-1634_2014-11-07_16:54:33.patch, KAFKA-1634_2014-11-17_17:42:44.patch, 
 KAFKA-1634_2014-11-21_14:00:34.patch, KAFKA-1634_2014-12-01_11:44:35.patch, 
 KAFKA-1634_2014-12-01_18:03:12.patch, KAFKA-1634_2015-01-14_15:50:15.patch, 
 KAFKA-1634_2015-01-21_16:43:01.patch


 From the mailing list -
 following up on this -- I think the online API docs for OffsetCommitRequest
 still incorrectly refer to client-side timestamps:
 https://cwiki.apache.org/confluence/display/KAFKA/A+Guide+To+The+Kafka+Protocol#AGuideToTheKafkaProtocol-OffsetCommitRequest
 Wasn't that removed, and isn't it always handled server-side now?  Would one of
 the devs mind updating the API spec wiki?





[DISCUSS] KIP-6 - New reassignment partition logic for re-balancing

2015-01-21 Thread Joe Stein
Posted a KIP for --re-balance for partition assignment in reassignment tool.

https://cwiki.apache.org/confluence/display/KAFKA/KIP-6+-+New+reassignment+partition+logic+for+re-balancing

JIRA https://issues.apache.org/jira/browse/KAFKA-1792

While going through the KIP I thought of one thing from the JIRA that we
should change. We should preserve --generate to be existing functionality
for the next release it is in. If folks want to use --re-balance then
great, it just won't break any upgrade paths, yet.

/***
 Joe Stein
 Founder, Principal Consultant
 Big Data Open Source Security LLC
 http://www.stealth.ly
 Twitter: @allthingshadoop http://www.twitter.com/allthingshadoop
/


[DISCUSS] KIP-8 - Decommission a broker

2015-01-21 Thread Joe Stein
Hi, created a KIP
https://cwiki.apache.org/confluence/display/KAFKA/KIP-8+-+Decommission+a+broker

JIRA related https://issues.apache.org/jira/browse/KAFKA-1753

I took out the compatibility/migration section since this is new behavior.
If anyone can think of any compatibility concerns, we should add it back in.

/***
 Joe Stein
 Founder, Principal Consultant
 Big Data Open Source Security LLC
 http://www.stealth.ly
 Twitter: @allthingshadoop http://www.twitter.com/allthingshadoop
/


[jira] [Commented] (KAFKA-1848) Checking shutdown during each iteration of ZookeeperConsumerConnector

2015-01-21 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14286734#comment-14286734
 ] 

Guozhang Wang commented on KAFKA-1848:
--

Pushed to trunk, thanks.

 Checking shutdown during each iteration of ZookeeperConsumerConnector
 -

 Key: KAFKA-1848
 URL: https://issues.apache.org/jira/browse/KAFKA-1848
 Project: Kafka
  Issue Type: Bug
Reporter: Aditya Auradkar
Assignee: Aditya Auradkar
 Fix For: 0.9.0


 In ZookeeperConsumerConnector the syncedRebalance() method checks the 
 isShuttingDown flag before it triggers a rebalance. However, it does not 
 recheck the same value between successive retries which is possible if the 
 consumer is shutting down.
 This acquires the rebalanceLock and blocks shutdown from completing.
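 The fix amounts to rechecking the flag between retries, roughly like this
 (a simplified stand-in for the ZookeeperConsumerConnector logic, not the
 actual patch):

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch: the retry loop rechecks shutdown before every attempt,
// so a shutdown that starts mid-retry stops the loop promptly.
class RebalanceLoop {
    final AtomicBoolean isShuttingDown = new AtomicBoolean(false);
    int attemptsMade = 0;

    void syncedRebalance(int maxRetries) {
        for (int i = 0; i < maxRetries; i++) {
            if (isShuttingDown.get())   // recheck on EVERY iteration
                return;
            attemptsMade++;
            if (tryRebalance())
                return;
        }
    }

    boolean tryRebalance() {
        // Simulate a failing rebalance during which shutdown begins.
        isShuttingDown.set(true);
        return false;
    }
}
```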





[jira] [Resolved] (KAFKA-1848) Checking shutdown during each iteration of ZookeeperConsumerConnector

2015-01-21 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-1848.
--
Resolution: Fixed

 Checking shutdown during each iteration of ZookeeperConsumerConnector
 -

 Key: KAFKA-1848
 URL: https://issues.apache.org/jira/browse/KAFKA-1848
 Project: Kafka
  Issue Type: Bug
Reporter: Aditya Auradkar
Assignee: Aditya Auradkar
 Fix For: 0.9.0


 In ZookeeperConsumerConnector the syncedRebalance() method checks the 
 isShuttingDown flag before it triggers a rebalance. However, it does not 
 recheck the same value between successive retries which is possible if the 
 consumer is shutting down.
 This acquires the rebalanceLock and blocks shutdown from completing.





Jenkins build is back to normal : Kafka-trunk #376

2015-01-21 Thread Apache Jenkins Server
See https://builds.apache.org/job/Kafka-trunk/376/changes



Re: Review Request 27799: New consumer

2015-01-21 Thread Jaikiran Pai

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/27799/#review69121
---



clients/src/main/java/org/apache/kafka/clients/NetworkClient.java
https://reviews.apache.org/r/27799/#comment113741

Hi Jay,

I think doing this unmuteAll in a finally block might be a good idea, since 
that way we don't end up with a muted selector when/if something goes wrong 
during that polling.
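The shape of that suggestion, with a stand-in class rather than the real Selector API:

```java
// Sketch: unmute in finally so a failed poll can't leave channels muted.
class PollLoop {
    boolean muted = false;

    void pollOnce(Runnable poll) {
        muted = true;          // mute channels before processing
        try {
            poll.run();        // may throw mid-poll
        } finally {
            muted = false;     // always restored, success or failure
        }
    }
}
```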


- Jaikiran Pai


On Jan. 21, 2015, 4:47 p.m., Jay Kreps wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/27799/
 ---
 
 (Updated Jan. 21, 2015, 4:47 p.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-1760
 https://issues.apache.org/jira/browse/KAFKA-1760
 
 
 Repository: kafka
 
 
 Description
 ---
 
 New consumer.
 
 Addressed the first round of comments.
 
 
 Diffs
 -
 
   build.gradle c9ac43378c3bf5443f0f47c8ba76067237ecb348 
   clients/src/main/java/org/apache/kafka/clients/ClientRequest.java 
 d32c319d8ee4c46dad309ea54b136ea9798e2fd7 
   clients/src/main/java/org/apache/kafka/clients/ClusterConnectionStates.java 
 8aece7e81a804b177a6f2c12e2dc6c89c1613262 
   clients/src/main/java/org/apache/kafka/clients/CommonClientConfigs.java 
 PRE-CREATION 
   clients/src/main/java/org/apache/kafka/clients/ConnectionState.java 
 ab7e3220f9b76b92ef981d695299656f041ad5ed 
   clients/src/main/java/org/apache/kafka/clients/KafkaClient.java 
 397695568d3fd8e835d8f923a89b3b00c96d0ead 
   clients/src/main/java/org/apache/kafka/clients/NetworkClient.java 
 6746275d0b2596cd6ff7ce464a3a8225ad75ef00 
   
 clients/src/main/java/org/apache/kafka/clients/RequestCompletionHandler.java 
 PRE-CREATION 
   clients/src/main/java/org/apache/kafka/clients/consumer/CommitType.java 
 PRE-CREATION 
   clients/src/main/java/org/apache/kafka/clients/consumer/Consumer.java 
 c0c636b3e1ba213033db6d23655032c9bbd5e378 
   clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerConfig.java 
 57c1807ccba9f264186f83e91f37c34b959c8060 
   
 clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerRebalanceCallback.java
  e4cf7d1cfa01c2844b53213a7b539cdcbcbeaa3a 
   clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerRecord.java 
 16af70a5de52cca786fdea147a6a639b7dc4a311 
   
 clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerRecords.java 
 bdf4b26942d5a8c8a9503e05908e9a9eff6228a7 
   clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java 
 76efc216c9e6c3ab084461d792877092a189ad0f 
   clients/src/main/java/org/apache/kafka/clients/consumer/MockConsumer.java 
 fa88ac1a8b19b4294f211c4467fe68c7707ddbae 
   
 clients/src/main/java/org/apache/kafka/clients/consumer/NoOffsetForPartitionException.java
  PRE-CREATION 
   clients/src/main/java/org/apache/kafka/clients/consumer/OffsetMetadata.java 
 ea423ad15eebd262d20d5ec05d592cc115229177 
   
 clients/src/main/java/org/apache/kafka/clients/consumer/internals/Heartbeat.java
  PRE-CREATION 
   
 clients/src/main/java/org/apache/kafka/clients/consumer/internals/NoOpConsumerRebalanceCallback.java
  PRE-CREATION 
   
 clients/src/main/java/org/apache/kafka/clients/consumer/internals/SubscriptionState.java
  PRE-CREATION 
   clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java 
 fc71710dd5997576d3841a1c3b0f7e19a8c9698e 
   clients/src/main/java/org/apache/kafka/clients/producer/MockProducer.java 
 904976fadf0610982958628eaee810b60a98d725 
   clients/src/main/java/org/apache/kafka/clients/producer/ProducerConfig.java 
 8b3e565edd1ae04d8d34bd9f1a41e9fa8c880a75 
   
 clients/src/main/java/org/apache/kafka/clients/producer/internals/Metadata.java
  dcf46581b912cfb1b5c8d4cbc293d2d1444b7740 
   
 clients/src/main/java/org/apache/kafka/clients/producer/internals/Partitioner.java
  483899d2e69b33655d0e08949f5f64af2519660a 
   
 clients/src/main/java/org/apache/kafka/clients/producer/internals/Sender.java 
 ccc03d8447ebba40131a70e16969686ac4aab58a 
   clients/src/main/java/org/apache/kafka/common/Cluster.java 
 d3299b944062d96852452de455902659ad8af757 
   clients/src/main/java/org/apache/kafka/common/PartitionInfo.java 
 b15aa2c3ef2d7c4b24618ff42fd4da324237a813 
   clients/src/main/java/org/apache/kafka/common/config/ConfigDef.java 
 98cb79b701918eca3f6ca9823b6c7b7c97b3ecec 
   clients/src/main/java/org/apache/kafka/common/errors/ApiException.java 
 7c948b166a8ac07616809f260754116ae7764973 
   clients/src/main/java/org/apache/kafka/common/network/Selectable.java 
 b68bbf00ab8eba6c5867d346c91188142593ca6e 
   clients/src/main/java/org/apache/kafka/common/network/Selector.java 
 74d695ba39de44b6a3d15340ec0114bc4fce2ba2 
   clients/src/main/java/org/apache/kafka/common/protocol/Errors.java 
 

[jira] [Commented] (KAFKA-1835) Kafka new producer needs options to make blocking behavior explicit

2015-01-21 Thread Ewen Cheslack-Postava (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14286967#comment-14286967
 ] 

Ewen Cheslack-Postava commented on KAFKA-1835:
--

[~ppearcy] these are easier to review if they're on reviewboard -- might help 
to use the patch submission tool in the future. Here are some notes on the 
current patch:

KafkaProducer.java
* No need to use the object forms of primitive types, change Boolean -> 
boolean, Long -> long, etc.
* initialized should be an AtomicBoolean or volatile boolean since it's 
read/written from different threads
* Error handling when waiting for the Future to finish seems wrong -- if there 
is an exception, we probably want to pass it along/throw another one to 
indicate the problem to the caller. Currently it just falls through and then 
only throws an exception when send() is called, so the error ends up 
disconnected from the source of the problem. It seems like it would be better 
to just handle the error immediately.
* Similarly, I don't think send() should check initialized if preinitialization 
is handled in the constructor -- if failure to preinitialize also threw an 
exception, then it would be impossible to call send() unless preinitialization 
was complete.
* If you follow the above approach, you can avoid making initialized a field in 
the class. It would only need to be a local variable since it would only be 
used in the constructor.
* Do we even need the ExecutorService? Since the thread creating the producer 
is going to block by calling Future.get(), what does having the executor 
accomplish?
* initializeProducer() doesn't need a return value since it only ever returns true.

ProducerConfig.java
* Config has a getList() method and ConfigDef has a LIST type. Use those for 
pre.initialize.topics instead of parsing the list yourself.
* I think the docstrings could be better, e.g.:
pre.initialize.topics: List of topics to preload metadata for when creating 
the producer so subsequent calls to send are guaranteed not to block. If 
metadata for these topics cannot be loaded within 
<code>pre.initialize.timeout.ms</code> milliseconds, the producer constructor 
will throw an exception.
pre.initialize.timeout.ms: The producer blocks when sending the first message 
to a topic if metadata is not yet available for that topic. When this 
configuration is greater than 0, metadata for the topics specified by 
<code>pre.initialize.topics</code> are prefetched during construction, throwing 
an exception after <code>pre.initialize.timeout.ms</code> milliseconds if the 
metadata has not been populated.
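Pulled together, the constructor-side shape these notes point at might look like this (hypothetical and simplified; names like loadMetadata are stand-ins, not the real producer internals):

```java
import java.util.List;

// Sketch: preload metadata in the constructor and fail fast on timeout,
// so no executor and no shared "initialized" flag are needed.
class PreInitProducer {
    PreInitProducer(List<String> topics, long timeoutMs) {
        long deadline = System.currentTimeMillis() + timeoutMs;
        for (String topic : topics) {
            long remaining = deadline - System.currentTimeMillis();
            if (remaining <= 0 || !loadMetadata(topic, remaining))
                // Surface the failure at construction time, not from
                // an unrelated later send().
                throw new IllegalStateException(
                        "Metadata for topic '" + topic + "' not available in time");
        }
    }

    boolean loadMetadata(String topic, long timeoutMs) {
        return !topic.isEmpty(); // stand-in for a real metadata fetch
    }
}
```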

 Kafka new producer needs options to make blocking behavior explicit
 ---

 Key: KAFKA-1835
 URL: https://issues.apache.org/jira/browse/KAFKA-1835
 Project: Kafka
  Issue Type: Improvement
  Components: clients
Affects Versions: 0.8.2, 0.8.3, 0.9.0
Reporter: Paul Pearcy
 Fix For: 0.8.2

 Attachments: KAFKA-1835-New-producer--blocking_v0.patch

   Original Estimate: 504h
  Remaining Estimate: 504h

 The new (0.8.2 standalone) producer will block the first time it attempts to 
 retrieve metadata for a topic. This is not the desired behavior in some use 
 cases where async non-blocking guarantees are required and message loss is 
 acceptable in known cases. Also, most developers will assume an API that 
 returns a future is safe to call in a critical request path. 
 Discussing on the mailing list, the most viable option is to have the 
 following settings:
  pre.initialize.topics=x,y,z
  pre.initialize.timeout=x
  
 This moves potential blocking to the init of the producer and outside of some 
 random request. The potential will still exist for blocking in a corner case 
 where connectivity with Kafka is lost and a topic not included in pre-init 
 has a message sent for the first time. 
 There is the question of what to do when initialization fails. There are a 
 couple of options that I'd like available:
 - Fail creation of the client 
 - Fail all sends until the meta is available 
 Open to input on how the above option should be expressed. 
 It is also worth noting more nuanced solutions exist that could work without 
 the extra settings, they just end up having extra complications and at the 
 end of the day not adding much value. For instance, the producer could accept 
 and queue messages(note: more complicated than I am making it sound due to 
 storing all accepted messages in pre-partitioned compact binary form), but 
 you're still going to be forced to choose to either start blocking or 
 dropping messages at some point. 
 I have some test cases I am going to port over to the Kafka producer 
 integration ones and start from there. My current impl is in scala, but 
 porting to Java shouldn't be a big deal (was 

[DISCUSS] KIP-5 - Broker Configuration Management

2015-01-21 Thread Joe Stein
Created a KIP
https://cwiki.apache.org/confluence/display/KAFKA/KIP-5+-+Broker+Configuration+Management

JIRA https://issues.apache.org/jira/browse/KAFKA-1786

/***
 Joe Stein
 Founder, Principal Consultant
 Big Data Open Source Security LLC
 http://www.stealth.ly
 Twitter: @allthingshadoop http://www.twitter.com/allthingshadoop
/


[DISCUSS] KIP-4 - Command line and centralized administrative operations

2015-01-21 Thread Joe Stein
Hi, created a KIP
https://cwiki.apache.org/confluence/display/KAFKA/KIP-4+-+Command+line+and+centralized+administrative+operations

JIRA https://issues.apache.org/jira/browse/KAFKA-1694

/***
 Joe Stein
 Founder, Principal Consultant
 Big Data Open Source Security LLC
 http://www.stealth.ly
 Twitter: @allthingshadoop http://www.twitter.com/allthingshadoop
/


Re: NIO and Threading implementation

2015-01-21 Thread Chittaranjan Hota
Thanks for your comments Jay.

Quote: Technically startup is not called from multiple threads but the class's
correctness should not depend on the current usage so it should work correctly
if it were. -- If this were a requirement, then one can see that many methods
are NOT thread safe while the startup happens. If we need to stick to the goal
of exposing Kafka initialization to other parent applications, a few things
have to change. Nevertheless, I am currently doing some changes on my local
copy, and once I see how things look I will sync back with you.

For the other couple of things (removed wake up and also added awaits
correctly) i have done the changes locally and deployed to our stage
cluster (3 brokers, 3 zk nodes) and did some load tests today.

I am not sure I understood what a single-threaded selector loop means, or
the locking in selector loops. I would love to have a conversation with you
about this.

Thanks again.




On Wed, Jan 21, 2015 at 2:15 PM, Jay Kreps jay.kr...@gmail.com wrote:

 1. a. I think startup is a public method on KafkaServer so for people
 embedding Kafka in some way this helps guarantee correctness.
 b. I think KafkaScheduler tries to be a bit too clever, there is a patch
 out there that just moves to global synchronization for the whole class
 which is easier to reason about. Technically startup is not called from
 multiple threads, but the class's correctness should not depend on the
 current usage, so it should work correctly if it were.
 c. I think in cases where you actually just want to start and run N
 threads, using Thread directly is sensible. ExecutorService is useful but
 does have a ton of gadgets and gizmos that obscure the basic usage in that
 case.
 d. Yeah we should probably wait until the processor threads start as well.
 I think it probably doesn't cause misbehavior as is, but it would be better
 if the postcondition of startup was that all threads had started.
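
 The global-synchronization idea in point (b) above can be sketched as follows.
 This is a hypothetical illustration of an idempotent, thread-safe startup(),
 not the actual KafkaScheduler code:

```java
// Hypothetical sketch of an idempotent, thread-safe startup() in the spirit of
// "global synchronization for the whole class" -- NOT the real Kafka code.
public class SafeStartup {
    private boolean started = false;   // guarded by "this"; no volatile needed

    public synchronized void startup() {
        if (started) return;           // safe to call from multiple threads
        started = true;
        // ... start worker threads here ...
    }

    public synchronized boolean isStarted() {
        return started;
    }

    public static void main(String[] args) throws InterruptedException {
        SafeStartup s = new SafeStartup();
        Thread t1 = new Thread(s::startup);
        Thread t2 = new Thread(s::startup);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(s.isStarted()); // prints "true"
    }
}
```

 Because every access to `started` happens inside a synchronized block, the
 monitor itself establishes the happens-before ordering, which is why the
 extra volatile flag questioned below is redundant.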

 2. a. There are different ways to do this. My overwhelming experience has
 been that any attempt to share a selector across threads is very painful.
 Making the selector loops single threaded just really really simplifies
 everything, but also the performance tends to be a lot better because there
 is far less locking inside that selector loop.
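
 The single-threaded selector loop pattern described above looks roughly like
 this minimal sketch (assumption: one Selector owned and polled by exactly one
 thread; the real Kafka SocketServer code differs):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.util.Iterator;

// Sketch: one thread owns one Selector and is the only thread that touches it,
// so no locking is needed inside the loop.
public class SelectorLoop {
    public static int pollOnce(long timeoutMs) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.configureBlocking(false);
        server.bind(new InetSocketAddress(0));       // ephemeral port
        server.register(selector, SelectionKey.OP_ACCEPT);

        int ready = selector.select(timeoutMs);      // blocks this thread only
        Iterator<SelectionKey> it = selector.selectedKeys().iterator();
        while (it.hasNext()) {
            SelectionKey key = it.next();
            it.remove();                             // must remove handled keys
            if (key.isAcceptable()) {
                // accept and register the new channel with THIS selector
            }
        }
        server.close();
        selector.close();
        return ready;
    }

    public static void main(String[] args) throws IOException {
        System.out.println(pollOnce(100)); // prints "0": no client ever connects
    }
}
```

 Giving each Acceptor/Processor thread its own selector, as Kafka does, is an
 instance of this pattern: cross-thread coordination then only needs
 Selector.wakeup(), not shared selector state.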
 b. Yeah I share your skepticism of that call. I'm not sure why it is there
 or if it is needed. I agree that wakeup should only be needed from other
 threads. It would be good to untangle that mystery. I wonder what happens
 if it is removed.

 -Jay

 On Wed, Jan 21, 2015 at 1:58 PM, Chittaranjan Hota chitts.h...@gmail.com
 wrote:

  Hello,
  Congratulations to the folks behind Kafka. It has been a smooth ride
  dealing with multi-TB data where the same setup in JMS often fell apart.
 
  Although I have been using Kafka for more than a few days now, I only
  started looking into the code base yesterday and already have some
  questions. I would appreciate some input on why the implementation is done
  the way it is.
 
  Version : 0.8.1
 
  THREADING RELATED
  1. Why is the startup code synchronized? Which are the competing threads?
  a. startReporters func is synchronized
  b. Why is KafkaScheduler startup synchronized? There is also a volatile
  variable declared even though the synchronized block itself already
  guarantees a happens-before ordering.
 c. Use of the raw new Thread syntax instead of relying on an
  ExecutorService.
 d. The processor threads use a CountDownLatch, but the main thread doesn't
  await on it for the processors to signal that startup is complete.
 
 
  NIO RELATED
  2.
 a. The Acceptor and each Processor thread have their own selector (since
  they extend the abstract class AbstractServerThread). Ideally a
  single selector suffices for multiplexing. Is there any reason why multiple
  selectors are used?
 b. The selector wakeup calls by Processors in the read method (line 362 of
  SocketServer.scala) are MISSED calls, since there is no thread waiting on
  the select at that point.
 
  Looking forward to learning the code further!
  Thanks in advance.
 
  Regards,
  Chitta
 



Re: [KIP-DISCUSSION] Mirror Maker Enhancement

2015-01-21 Thread Jiangjie Qin
Hi Jay,

Thanks for comments. Please see inline responses.

Jiangjie (Becket) Qin

On 1/21/15, 1:33 PM, Jay Kreps jay.kr...@gmail.com wrote:

Hey guys,

A couple questions/comments:

1. The callback and user-controlled commit offset functionality is already
in the new consumer which we are working on in parallel. If we accelerated
that work it might help concentrate efforts. I admit this might take
slightly longer in calendar time but could still probably get done this
quarter. Have you guys considered that approach?
Yes, I totally agree that ideally we should put our efforts into the new
consumer. The main reason for still working on the old consumer is that we
expect it will still be used at LinkedIn for quite a while before the new
consumer
could be fully rolled out. And we have recently been suffering a lot from
mirror maker data loss issues. So our current plan is to make the necessary
changes to stabilize the current mirror maker in production. Then we can test
and roll out the new consumer gradually without getting burnt.

2. I think partitioning on the hash of the topic partition is not a very
good idea because that will make the case of going from a cluster with
fewer partitions to one with more partitions not work. I think an
intuitive
way to do this would be the following:
a. Default behavior: Just do what the producer does. I.e. if you specify a
key use it for partitioning, if not just partition in a round-robin
fashion.
b. Add a --preserve-partition option that will explicitly inherent the
partition from the source irrespective of whether there is a key or which
partition that key would hash to.
Sorry that I did not explain this clearly enough. The hash of the topic
partition is only used to decide which mirror maker data channel queue
the consumer thread should put a message into. It only tries to make sure
that the messages from the same partition are sent by the same producer
thread, to guarantee the sending order. This is not at all related to which
partition in the target cluster the messages end up in. That is still decided
by the producer.
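
In other words, the consumer thread picks a data channel queue
deterministically from the source topic-partition, so ordering within a
partition is preserved through a single producer thread. A minimal sketch of
the idea (hypothetical; not the actual mirror maker code):

```java
// Hypothetical sketch: route messages from the same source topic-partition to
// the same data-channel queue, and hence the same producer thread.
public class QueueRouter {
    public static int queueIndex(String sourceTopic, int sourcePartition, int numQueues) {
        int hash = 31 * sourceTopic.hashCode() + sourcePartition;
        return (hash & 0x7fffffff) % numQueues;  // mask sign bit: index must be >= 0
    }

    public static void main(String[] args) {
        // The same source partition always maps to the same queue:
        System.out.println(queueIndex("events", 3, 4) == queueIndex("events", 3, 4)); // prints "true"
    }
}
```

The target-cluster partition is chosen later by the producer's own
partitioner, independently of this routing.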

3. You don't actually give the ConsumerRebalanceListener interface. What
is
that going to look like?
Good point! I should have put it in the wiki. I just added it.

4. What is MirrorMakerRecord? I think ideally the
MirrorMakerMessageHandler
interface would take a ConsumerRecord as input and return a
ProducerRecord,
right? That would allow you to transform the key, value, partition, or
destination topic...
MirrorMakerRecord is introduced in KAFKA-1650, which is exactly the same
as ConsumerRecord in KAFKA-1760.
private[kafka] class MirrorMakerRecord (val sourceTopic: String,
  val sourcePartition: Int,
  val sourceOffset: Long,
  val key: Array[Byte],
  val value: Array[Byte]) {
  def size = value.length + {if (key == null) 0 else key.length}
}

However, because the source partition and offset are needed in the producer
thread for consumer offset bookkeeping, the record returned by
MirrorMakerMessageHandler needs to contain that information. Therefore
ProducerRecord does not work here. We could probably let the message handler
take ConsumerRecord for both input and output.
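
Under that option, the handler's shape would be roughly as below. This is a
sketch of the idea only; SourceRecord and MessageHandler are illustrative
names, not a committed API:

```java
import java.util.Collections;
import java.util.List;

// Hypothetical handler shape: the record type keeps the source partition and
// offset, so the producer thread can still do offset bookkeeping after handling.
public class HandlerSketch {
    public static class SourceRecord {
        public final String topic;
        public final int partition;
        public final long offset;
        public final byte[] key;
        public final byte[] value;
        public SourceRecord(String topic, int partition, long offset, byte[] key, byte[] value) {
            this.topic = topic; this.partition = partition; this.offset = offset;
            this.key = key; this.value = value;
        }
    }

    public interface MessageHandler {
        List<SourceRecord> handle(SourceRecord in);  // may filter, transform, or fan out
    }

    public static void main(String[] args) {
        MessageHandler identity = Collections::singletonList;  // pass-through handler
        SourceRecord r = new SourceRecord("t", 0, 42L, null, new byte[]{1});
        System.out.println(identity.handle(r).size()); // prints "1"
    }
}
```

Returning a list lets one handler cover all three use cases in the KIP:
filtering (empty list), format conversion (one transformed record), and
fan-out (several records).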

5. Have you guys thought about what the implementation will look like in
terms of threading architecture etc with the new consumer? That will be
soon so even if we aren't starting with that let's make sure we can get
rid
of a lot of the current mirror maker accidental complexity in terms of
threads and queues when we move to that.
I haven't thought about it thoroughly. The quick idea is that after migrating
to the new consumer, it is probably better to use a single consumer thread.
If multithreading is needed, decoupling consumption and processing might be
used. MirrorMaker definitely needs to be changed after the new consumer gets
checked in. I'll document the changes and can submit follow-up patches
after the new consumer is available.

-Jay

On Tue, Jan 20, 2015 at 4:31 PM, Jiangjie Qin j...@linkedin.com.invalid
wrote:

 Hi Kafka Devs,

 We are working on Kafka Mirror Maker enhancement. A KIP is posted to
 document and discuss on the followings:
 1. KAFKA-1650: No Data loss mirror maker change
 2. KAFKA-1839: To allow partition aware mirror.
 3. KAFKA-1840: To allow message filtering/format conversion
 Feedbacks are welcome. Please let us know if you have any questions or
 concerns.

 Thanks.

 Jiangjie (Becket) Qin




[KIP-DISCUSSION] KIP-7 Security - IP Filtering

2015-01-21 Thread Jeff Holoman
Posted a KIP for IP Filtering:

https://cwiki.apache.org/confluence/display/KAFKA/KIP-7+-+Security+-+IP+Filtering

Relevant JIRA:
https://issues.apache.org/jira/browse/KAFKA-1810

Appreciate any feedback.

Thanks

Jeff
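
For context, IP filtering of this kind boils down to a prefix match of the
connecting address against configured CIDR ranges. A minimal sketch of such a
check (hypothetical; not the KIP's proposed implementation):

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

// Hypothetical sketch of a CIDR allowlist check for incoming connections.
public class IpFilter {
    public static boolean inCidr(String ip, String cidr) throws UnknownHostException {
        String[] parts = cidr.split("/");
        byte[] addr = InetAddress.getByName(ip).getAddress();      // literal IPs: no DNS lookup
        byte[] net  = InetAddress.getByName(parts[0]).getAddress();
        int prefix = Integer.parseInt(parts[1]);
        int fullBytes = prefix / 8, remBits = prefix % 8;
        for (int i = 0; i < fullBytes; i++)
            if (addr[i] != net[i]) return false;                   // whole-byte prefix mismatch
        if (remBits > 0) {
            int mask = ~((1 << (8 - remBits)) - 1) & 0xff;         // high remBits of the next byte
            if ((addr[fullBytes] & mask) != (net[fullBytes] & mask)) return false;
        }
        return true;
    }

    public static void main(String[] args) throws UnknownHostException {
        System.out.println(inCidr("192.168.1.5", "192.168.0.0/16")); // prints "true"
        System.out.println(inCidr("10.0.0.1", "192.168.0.0/16"));    // prints "false"
    }
}
```

A broker-side filter would run a check like this at connection-accept time,
before any request is read, rejecting (or only allowing) matching addresses.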


[jira] [Commented] (KAFKA-1835) Kafka new producer needs options to make blocking behavior explicit

2015-01-21 Thread Paul Pearcy (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14286816#comment-14286816
 ] 

Paul Pearcy commented on KAFKA-1835:


Do I need to do anything else to get this in the review pipeline? 

 Kafka new producer needs options to make blocking behavior explicit
 ---

 Key: KAFKA-1835
 URL: https://issues.apache.org/jira/browse/KAFKA-1835
 Project: Kafka
  Issue Type: Improvement
  Components: clients
Affects Versions: 0.8.2, 0.8.3, 0.9.0
Reporter: Paul Pearcy
 Fix For: 0.8.2

 Attachments: KAFKA-1835-New-producer--blocking_v0.patch

   Original Estimate: 504h
  Remaining Estimate: 504h

 The new (0.8.2 standalone) producer will block the first time it attempts to 
 retrieve metadata for a topic. This is not the desired behavior in some use 
 cases where async non-blocking guarantees are required and message loss is 
 acceptable in known cases. Also, most developers will assume an API that 
 returns a future is safe to call in a critical request path. 
 Discussing on the mailing list, the most viable option is to have the 
 following settings:
  pre.initialize.topics=x,y,z
  pre.initialize.timeout=x
  
 This moves potential blocking to the init of the producer and outside of some 
 random request. The potential will still exist for blocking in a corner case 
 where connectivity with Kafka is lost and a topic not included in pre-init 
 has a message sent for the first time. 
 There is the question of what to do when initialization fails. There are a 
 couple of options that I'd like available:
 - Fail creation of the client 
 - Fail all sends until the meta is available 
 Open to input on how the above option should be expressed. 
 It is also worth noting more nuanced solutions exist that could work without 
 the extra settings, they just end up having extra complications and at the 
 end of the day not adding much value. For instance, the producer could accept 
 and queue messages (note: more complicated than I am making it sound, due to 
 storing all accepted messages in pre-partitioned compact binary form), but 
 you're still going to be forced to choose to either start blocking or 
 dropping messages at some point. 
 I have some test cases I am going to port over to the Kafka producer 
 integration ones and start from there. My current impl is in scala, but 
 porting to Java shouldn't be a big deal (was using a promise to track init 
 status, but will likely need to make that an atomic bool). 
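
 If adopted, client setup would look roughly like this. Note that the
 pre.initialize.* keys are the proposal above, not an existing producer API:

```java
import java.util.Properties;

// Sketch of the proposed configuration; the pre.initialize.* keys are
// hypothetical settings from this discussion, not part of the released producer.
public class PreInitConfig {
    public static Properties build() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("pre.initialize.topics", "events,metrics");  // fetch metadata at init
        props.put("pre.initialize.timeout", "5000");           // ms to wait before failing
        return props;
    }

    public static void main(String[] args) {
        System.out.println(build().getProperty("pre.initialize.topics")); // prints "events,metrics"
    }
}
```

 With this in place, metadata blocking would happen once at construction time
 (bounded by the timeout), leaving send() non-blocking for the listed topics.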



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1810) Add IP Filtering / Whitelists-Blacklists

2015-01-21 Thread Jeff Holoman (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14286741#comment-14286741
 ] 

Jeff Holoman commented on KAFKA-1810:
-

The current plan is to rework the configuration portion of this patch once 
KAFKA-1845 is committed (ConfigDef)

 Add IP Filtering / Whitelists-Blacklists 
 -

 Key: KAFKA-1810
 URL: https://issues.apache.org/jira/browse/KAFKA-1810
 Project: Kafka
  Issue Type: New Feature
  Components: core, network
Reporter: Jeff Holoman
Assignee: Jeff Holoman
Priority: Minor
 Fix For: 0.8.3

 Attachments: KAFKA-1810.patch, KAFKA-1810_2015-01-15_19:47:14.patch


 While longer-term goals of security in Kafka are on the roadmap there exists 
 some value for the ability to restrict connection to Kafka brokers based on 
 IP address. This is not intended as a replacement for security but more of a 
 precaution against misconfiguration and to provide some level of control to 
 Kafka administrators about who is reading/writing to their cluster.
 1) In some organizations software administration vs o/s systems 
 administration and network administration is disjointed and not well 
 choreographed. Providing software administrators the ability to configure 
 their platform relatively independently (after initial configuration) from 
 Systems administrators is desirable.
 2) Configuration and deployment is sometimes error prone and there are 
 situations when test environments could erroneously read/write to production 
 environments
 3) An additional precaution against reading sensitive data is typically 
 welcomed in most large enterprise deployments.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1634) Improve semantics of timestamp in OffsetCommitRequests and update documentation

2015-01-21 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14286763#comment-14286763
 ] 

Jun Rao commented on KAFKA-1634:


I guess you mean rebasing KAFKA-1841, instead of KAFKA-1481?

 Improve semantics of timestamp in OffsetCommitRequests and update 
 documentation
 ---

 Key: KAFKA-1634
 URL: https://issues.apache.org/jira/browse/KAFKA-1634
 Project: Kafka
  Issue Type: Bug
Reporter: Neha Narkhede
Assignee: Guozhang Wang
Priority: Blocker
 Fix For: 0.8.3

 Attachments: KAFKA-1634.patch, KAFKA-1634_2014-11-06_15:35:46.patch, 
 KAFKA-1634_2014-11-07_16:54:33.patch, KAFKA-1634_2014-11-17_17:42:44.patch, 
 KAFKA-1634_2014-11-21_14:00:34.patch, KAFKA-1634_2014-12-01_11:44:35.patch, 
 KAFKA-1634_2014-12-01_18:03:12.patch, KAFKA-1634_2015-01-14_15:50:15.patch, 
 KAFKA-1634_2015-01-21_16:43:01.patch


 From the mailing list -
 following up on this -- I think the online API docs for OffsetCommitRequest
 still incorrectly refer to client-side timestamps:
 https://cwiki.apache.org/confluence/display/KAFKA/A+Guide+To+The+Kafka+Protocol#AGuideToTheKafkaProtocol-OffsetCommitRequest
 Wasn't that removed and now always handled server-side now?  Would one of
 the devs mind updating the API spec wiki?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 27391: Fix KAFKA-1634

2015-01-21 Thread Jun Rao

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/27391/#review69106
---


Thanks for the patch. A few more comments.


clients/src/main/java/org/apache/kafka/common/requests/OffsetCommitRequest.java
https://reviews.apache.org/r/27391/#comment113727

Would it be better to use -1L as the default retention time? MAX_VALUE 
could be useful for the case when a client wants the offset never to be expired.
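
The trade-off can be made concrete with a tiny sketch of how such sentinels
might be interpreted (hypothetical illustration, not the actual OffsetManager
logic):

```java
// Hypothetical: -1 means "use the broker default retention";
// Long.MAX_VALUE means "never expire this offset".
public class RetentionSketch {
    public static long expiryTimestamp(long commitTs, long retentionMs, long brokerDefaultMs) {
        long r = (retentionMs == -1L) ? brokerDefaultMs : retentionMs;
        return (r == Long.MAX_VALUE) ? Long.MAX_VALUE : commitTs + r;
    }

    public static void main(String[] args) {
        System.out.println(expiryTimestamp(1000L, -1L, 500L));                               // prints "1500"
        System.out.println(expiryTimestamp(1000L, Long.MAX_VALUE, 500L) == Long.MAX_VALUE);  // prints "true"
    }
}
```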



core/src/main/scala/kafka/api/OffsetCommitRequest.scala
https://reviews.apache.org/r/27391/#comment113724

It seems that our coding convention has been not to use {} on a single line 
in the body. So, we use
if ()
  do sth
instead of 
if () {
  do sth
}



core/src/main/scala/kafka/server/KafkaApis.scala
https://reviews.apache.org/r/27391/#comment113730

I am not sure that we should change the timestamp for offsets produced in 
V0 and V1. There could be data in the offset topic already written by 0.8.2 
code. See the other comment in OffsetManager on expiring.



core/src/main/scala/kafka/server/OffsetManager.scala
https://reviews.apache.org/r/27391/#comment113729

Does that change work correctly with offsets already stored in v0 and v1 
format using 0.8.2 code? Would those offsets still be expired at the right time?


- Jun Rao


On Jan. 22, 2015, 12:43 a.m., Guozhang Wang wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/27391/
 ---
 
 (Updated Jan. 22, 2015, 12:43 a.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-1634
 https://issues.apache.org/jira/browse/KAFKA-1634
 
 
 Repository: kafka
 
 
 Description
 ---
 
 Incorporated Joel's comments
 
 
 Diffs
 -
 
   clients/src/main/java/org/apache/kafka/common/protocol/Protocol.java 
 7517b879866fc5dad5f8d8ad30636da8bbe7784a 
   clients/src/main/java/org/apache/kafka/common/protocol/types/Struct.java 
 121e880a941fcd3e6392859edba11a94236494cc 
   
 clients/src/main/java/org/apache/kafka/common/requests/OffsetCommitRequest.java
  3ee5cbad55ce836fd04bb954dcf6ef2f9bc3288f 
   
 clients/src/test/java/org/apache/kafka/common/requests/RequestResponseTest.java
  df37fc6d8f0db0b8192a948426af603be3444da4 
   core/src/main/scala/kafka/api/OffsetCommitRequest.scala 
 050615c72efe7dbaa4634f53943bd73273d20ffb 
   core/src/main/scala/kafka/api/OffsetFetchRequest.scala 
 c7604b9cdeb8f46507f04ed7d35fcb3d5f150713 
   core/src/main/scala/kafka/common/OffsetMetadataAndError.scala 
 4cabffeacea09a49913505db19a96a55d58c0909 
   core/src/main/scala/kafka/consumer/ZookeeperConsumerConnector.scala 
 191a8677444e53b043e9ad6e94c5a9191c32599e 
   core/src/main/scala/kafka/server/KafkaApis.scala 
 ec8d9f7ba44741db40875458bd524c4062ad6a26 
   core/src/main/scala/kafka/server/KafkaConfig.scala 
 6d74983472249eac808d361344c58cc2858ec971 
   core/src/main/scala/kafka/server/KafkaServer.scala 
 89200da30a04943f0b9befe84ab17e62b747c8c4 
   core/src/main/scala/kafka/server/OffsetManager.scala 
 0bdd42fea931cddd072c0fff765b10526db6840a 
   core/src/main/scala/kafka/server/ReplicaManager.scala 
 e58fbb922e93b0c31dff04f187fcadb4ece986d7 
   core/src/test/scala/unit/kafka/api/RequestResponseSerializationTest.scala 
 cd16ced5465d098be7a60498326b2a98c248f343 
   core/src/test/scala/unit/kafka/server/OffsetCommitTest.scala 
 5b93239cdc26b5be7696f4e7863adb9fbe5f0ed5 
   core/src/test/scala/unit/kafka/server/ServerShutdownTest.scala 
 ba1e48e4300c9fb32e36e7266cb05294f2a481e5 
 
 Diff: https://reviews.apache.org/r/27391/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Guozhang Wang
 




Re: Review Request 27799: New consumer

2015-01-21 Thread Jaikiran Pai


 On Jan. 22, 2015, 3:14 a.m., Jaikiran Pai wrote:
  clients/src/main/java/org/apache/kafka/clients/NetworkClient.java, line 253
  https://reviews.apache.org/r/27799/diff/4/?file=828376#file828376line253
 
  Hi Jay,
  
  I think doing this unmuteAll in a finally block might be a good idea, 
  since that way we don't end up with a muted selected when/if something goes 
  wrong during that polling.

Typo in my previous comment. Should have been ... since that way we don't end 
up with a muted selector
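
The suggestion is the standard try/finally cleanup pattern, sketched
generically here (an illustration of the pattern, not the actual
NetworkClient/Selector code):

```java
// Generic sketch of the suggested fix: restore state in finally so a failure
// during poll() cannot leave the selector muted.
public class UnmuteSketch {
    private boolean muted = false;

    public void pollSafely(Runnable poll) {
        muted = true;                 // mute while polling
        try {
            poll.run();
        } finally {
            muted = false;            // always unmute, even if poll throws
        }
    }

    public boolean isMuted() { return muted; }

    public static void main(String[] args) {
        UnmuteSketch s = new UnmuteSketch();
        try {
            s.pollSafely(() -> { throw new RuntimeException("poll failed"); });
        } catch (RuntimeException expected) { }
        System.out.println(s.isMuted()); // prints "false"
    }
}
```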


- Jaikiran


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/27799/#review69121
---


On Jan. 21, 2015, 4:47 p.m., Jay Kreps wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/27799/
 ---
 
 (Updated Jan. 21, 2015, 4:47 p.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-1760
 https://issues.apache.org/jira/browse/KAFKA-1760
 
 
 Repository: kafka
 
 
 Description
 ---
 
 New consumer.
 
 Addressed the first round of comments.
 
 
 Diffs
 -
 
   build.gradle c9ac43378c3bf5443f0f47c8ba76067237ecb348 
   clients/src/main/java/org/apache/kafka/clients/ClientRequest.java 
 d32c319d8ee4c46dad309ea54b136ea9798e2fd7 
   clients/src/main/java/org/apache/kafka/clients/ClusterConnectionStates.java 
 8aece7e81a804b177a6f2c12e2dc6c89c1613262 
   clients/src/main/java/org/apache/kafka/clients/CommonClientConfigs.java 
 PRE-CREATION 
   clients/src/main/java/org/apache/kafka/clients/ConnectionState.java 
 ab7e3220f9b76b92ef981d695299656f041ad5ed 
   clients/src/main/java/org/apache/kafka/clients/KafkaClient.java 
 397695568d3fd8e835d8f923a89b3b00c96d0ead 
   clients/src/main/java/org/apache/kafka/clients/NetworkClient.java 
 6746275d0b2596cd6ff7ce464a3a8225ad75ef00 
   
 clients/src/main/java/org/apache/kafka/clients/RequestCompletionHandler.java 
 PRE-CREATION 
   clients/src/main/java/org/apache/kafka/clients/consumer/CommitType.java 
 PRE-CREATION 
   clients/src/main/java/org/apache/kafka/clients/consumer/Consumer.java 
 c0c636b3e1ba213033db6d23655032c9bbd5e378 
   clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerConfig.java 
 57c1807ccba9f264186f83e91f37c34b959c8060 
   
 clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerRebalanceCallback.java
  e4cf7d1cfa01c2844b53213a7b539cdcbcbeaa3a 
   clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerRecord.java 
 16af70a5de52cca786fdea147a6a639b7dc4a311 
   
 clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerRecords.java 
 bdf4b26942d5a8c8a9503e05908e9a9eff6228a7 
   clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java 
 76efc216c9e6c3ab084461d792877092a189ad0f 
   clients/src/main/java/org/apache/kafka/clients/consumer/MockConsumer.java 
 fa88ac1a8b19b4294f211c4467fe68c7707ddbae 
   
 clients/src/main/java/org/apache/kafka/clients/consumer/NoOffsetForPartitionException.java
  PRE-CREATION 
   clients/src/main/java/org/apache/kafka/clients/consumer/OffsetMetadata.java 
 ea423ad15eebd262d20d5ec05d592cc115229177 
   
 clients/src/main/java/org/apache/kafka/clients/consumer/internals/Heartbeat.java
  PRE-CREATION 
   
 clients/src/main/java/org/apache/kafka/clients/consumer/internals/NoOpConsumerRebalanceCallback.java
  PRE-CREATION 
   
 clients/src/main/java/org/apache/kafka/clients/consumer/internals/SubscriptionState.java
  PRE-CREATION 
   clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java 
 fc71710dd5997576d3841a1c3b0f7e19a8c9698e 
   clients/src/main/java/org/apache/kafka/clients/producer/MockProducer.java 
 904976fadf0610982958628eaee810b60a98d725 
   clients/src/main/java/org/apache/kafka/clients/producer/ProducerConfig.java 
 8b3e565edd1ae04d8d34bd9f1a41e9fa8c880a75 
   
 clients/src/main/java/org/apache/kafka/clients/producer/internals/Metadata.java
  dcf46581b912cfb1b5c8d4cbc293d2d1444b7740 
   
 clients/src/main/java/org/apache/kafka/clients/producer/internals/Partitioner.java
  483899d2e69b33655d0e08949f5f64af2519660a 
   
 clients/src/main/java/org/apache/kafka/clients/producer/internals/Sender.java 
 ccc03d8447ebba40131a70e16969686ac4aab58a 
   clients/src/main/java/org/apache/kafka/common/Cluster.java 
 d3299b944062d96852452de455902659ad8af757 
   clients/src/main/java/org/apache/kafka/common/PartitionInfo.java 
 b15aa2c3ef2d7c4b24618ff42fd4da324237a813 
   clients/src/main/java/org/apache/kafka/common/config/ConfigDef.java 
 98cb79b701918eca3f6ca9823b6c7b7c97b3ecec 
   clients/src/main/java/org/apache/kafka/common/errors/ApiException.java 
 7c948b166a8ac07616809f260754116ae7764973 
   clients/src/main/java/org/apache/kafka/common/network/Selectable.java 
 b68bbf00ab8eba6c5867d346c91188142593ca6e 
   

Re: [DISCUSS] KIPs

2015-01-21 Thread Gwen Shapira
Good point :)

I added the specifics of the new UpdateMetadataRequest, which is the
only protocol bump in this change.

Highlighted the broker and producer/consumer configuration changes,
added some example values and added the new zookeeper json.

Hope this makes things clearer.
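
For readers following along, the broker-side change amounts to configuration
along these lines (illustrative host and port values; see the KIP page for the
authoritative syntax):

```properties
# Illustrative multi-listener broker configuration per KIP-2 (example values)
listeners=PLAINTEXT://broker1.example.com:9092,SSL://broker1.example.com:9093
```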

On Wed, Jan 21, 2015 at 2:19 PM, Jay Kreps jay.kr...@gmail.com wrote:
 Hey Gwen,

 Could we get the actual changes in that KIP? I.e. changes to metadata
 request, changes to UpdateMetadataRequest, new configs and what will their
 valid values be, etc. This kind of says that those things will change but
 doesn't say what they will change to...

 -Jay

 On Mon, Jan 19, 2015 at 9:45 PM, Gwen Shapira gshap...@cloudera.com wrote:

 I created a KIP for the multi-port broker change.

 https://cwiki.apache.org/confluence/display/KAFKA/KIP-2+-+Refactor+brokers+to+allow+listening+on+multiple+ports+and+IPs

 I'm not re-opening the discussion, since it was agreed on over a month
 ago and implementation is close to complete (I hope!). Lets consider
 this voted and accepted?

 Gwen

 On Sun, Jan 18, 2015 at 10:31 AM, Jay Kreps jay.kr...@gmail.com wrote:
  Great! Sounds like everyone is on the same page
 
 - I created a template page to make things easier. If you do
 Tools-Copy
 on this page you can just fill in the italic portions with your
 details.
 - I retrofitted KIP-1 to match this formatting
 - I added the metadata section people asked for (a link to the
 discussion, the JIRA, and the current status). Let's make sure we
 remember
 to update the current status as things are figured out.
 - Let's keep the discussion on the mailing list rather than on the
 wiki
 pages. It makes sense to do one or the other so all the comments are
 in one
 place and I think prior experience is that the wiki comments are the
 worse
 way.
 
  I think it would be great do KIPs for some of the in-flight items folks
  mentioned.
 
  -Jay
 
  On Sat, Jan 17, 2015 at 8:23 AM, Gwen Shapira gshap...@cloudera.com
 wrote:
 
  +1
 
  Will be happy to provide a KIP for the multiple-listeners patch.
 
  Gwen
 
  On Sat, Jan 17, 2015 at 8:10 AM, Joe Stein joe.st...@stealth.ly
 wrote:
   +1 to everything we have been saying and where this (has settled
 to)/(is
   settling to).
  
   I am sure other folks have some more feedback and think we should try
 to
   keep this discussion going if need be. I am also a firm believer of
 form
   following function so kicking the tires some to flesh out the details
 of
   this and have some organic growth with the process will be healthy and
   beneficial to the community.
  
   For my part, what I will do is open a few KIP based on some of the
 work I
   have been involved with for 0.8.3. Off the top of my head this would
   include 1) changes to re-assignment of partitions 2) kafka cli 3)
 global
   configs 4) security white list black list by ip 5) SSL 6) We probably
  will
   have lots of Security related KIPs and should treat them all
 individually
   when the time is appropriate 7) Kafka on Mesos.
  
   If someone else wants to jump in to start getting some of the security
  KIP
   that we are going to have in 0.8.3 I think that would be great (e.g.
   Multiple Listeners for Kafka Brokers). There are also a few other
  tickets I
   can think of that are important to have in the code in 0.8.3 that
 should
   have KIP also that I haven't really been involved in. I will take a
 few
   minutes and go through JIRA (one I can think of like auto assign id
 that
  is
   already committed I think) and ask for a KIP if appropriate or if I
 feel
   that I can write it up (both from a time and understanding
 perspective)
  do
   so.
  
   long story short, I encourage folks to start moving ahead with the KIP
  for
   0.8.3 as how we operate. any objections?
  
   On Fri, Jan 16, 2015 at 2:40 PM, Guozhang Wang wangg...@gmail.com
  wrote:
  
   +1 on the idea, and we could mutually link the KIP wiki page with the
  the
   created JIRA ticket (i.e. include the JIRA number on the page and the
  KIP
   url on the ticket description).
  
   Regarding the KIP process, probably we do not need two phase
  communication
   of a [DISCUSS] followed by [VOTE], as Jay said the voting should be
  clear
   while people discuss about that.
  
   About who should trigger the process, I think the only involved
 people
   would be 1) when the patch is submitted / or even the ticket is
 created,
   the assignee could choose to start the KIP process if she thought it
 is
   necessary; 2) the reviewer of the patch can also suggest starting KIP
   discussions.
  
   On Fri, Jan 16, 2015 at 10:49 AM, Gwen Shapira 
 gshap...@cloudera.com
   wrote:
  
+1 to Ewen's suggestions: Deprecation, status and version.
   
Perhaps add the JIRA where the KIP was implemented to the metadata.
This will help tie things together.
   
On Fri, Jan 16, 2015 at 9:35 AM, Ewen Cheslack-Postava
e...@confluent.io wrote:
 I 

Re: Review Request 27799: New consumer

2015-01-21 Thread Onur Karaman

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/27799/#review69117
---



clients/src/main/java/org/apache/kafka/common/requests/ConsumerMetadataRequest.java
https://reviews.apache.org/r/27799/#comment113735

CURRENT_SCHEMA is sometimes public and sometimes private across the 
different requests / responses in this rb. Are some of these planned to be 
accessed elsewhere?



clients/src/main/java/org/apache/kafka/common/requests/MetadataResponse.java
https://reviews.apache.org/r/27799/#comment113734

Other CURRENT_SCHEMA's throughout the rb were changed to be final.


- Onur Karaman


On Jan. 21, 2015, 4:47 p.m., Jay Kreps wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/27799/
 ---
 
 (Updated Jan. 21, 2015, 4:47 p.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-1760
 https://issues.apache.org/jira/browse/KAFKA-1760
 
 
 Repository: kafka
 
 
 Description
 ---
 
 New consumer.
 
 Addressed the first round of comments.
 
 
 Diffs
 -
 
   build.gradle c9ac43378c3bf5443f0f47c8ba76067237ecb348 
   clients/src/main/java/org/apache/kafka/clients/ClientRequest.java 
 d32c319d8ee4c46dad309ea54b136ea9798e2fd7 
  clients/src/main/java/org/apache/kafka/clients/ClusterConnectionStates.java 
8aece7e81a804b177a6f2c12e2dc6c89c1613262 
  clients/src/main/java/org/apache/kafka/clients/CommonClientConfigs.java 
PRE-CREATION 
  clients/src/main/java/org/apache/kafka/clients/ConnectionState.java 
ab7e3220f9b76b92ef981d695299656f041ad5ed 
  clients/src/main/java/org/apache/kafka/clients/KafkaClient.java 
397695568d3fd8e835d8f923a89b3b00c96d0ead 
  clients/src/main/java/org/apache/kafka/clients/NetworkClient.java 
6746275d0b2596cd6ff7ce464a3a8225ad75ef00 
  clients/src/main/java/org/apache/kafka/clients/RequestCompletionHandler.java 
PRE-CREATION 
  clients/src/main/java/org/apache/kafka/clients/consumer/CommitType.java 
PRE-CREATION 
  clients/src/main/java/org/apache/kafka/clients/consumer/Consumer.java 
c0c636b3e1ba213033db6d23655032c9bbd5e378 
  clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerConfig.java 
57c1807ccba9f264186f83e91f37c34b959c8060 
  clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerRebalanceCallback.java 
e4cf7d1cfa01c2844b53213a7b539cdcbcbeaa3a 
  clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerRecord.java 
16af70a5de52cca786fdea147a6a639b7dc4a311 
  clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerRecords.java 
bdf4b26942d5a8c8a9503e05908e9a9eff6228a7 
  clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java 
76efc216c9e6c3ab084461d792877092a189ad0f 
  clients/src/main/java/org/apache/kafka/clients/consumer/MockConsumer.java 
fa88ac1a8b19b4294f211c4467fe68c7707ddbae 
  clients/src/main/java/org/apache/kafka/clients/consumer/NoOffsetForPartitionException.java 
PRE-CREATION 
  clients/src/main/java/org/apache/kafka/clients/consumer/OffsetMetadata.java 
ea423ad15eebd262d20d5ec05d592cc115229177 
  clients/src/main/java/org/apache/kafka/clients/consumer/internals/Heartbeat.java 
PRE-CREATION 
  clients/src/main/java/org/apache/kafka/clients/consumer/internals/NoOpConsumerRebalanceCallback.java 
PRE-CREATION 
  clients/src/main/java/org/apache/kafka/clients/consumer/internals/SubscriptionState.java 
PRE-CREATION 
  clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java 
fc71710dd5997576d3841a1c3b0f7e19a8c9698e 
  clients/src/main/java/org/apache/kafka/clients/producer/MockProducer.java 
904976fadf0610982958628eaee810b60a98d725 
  clients/src/main/java/org/apache/kafka/clients/producer/ProducerConfig.java 
8b3e565edd1ae04d8d34bd9f1a41e9fa8c880a75 
  clients/src/main/java/org/apache/kafka/clients/producer/internals/Metadata.java 
dcf46581b912cfb1b5c8d4cbc293d2d1444b7740 
  clients/src/main/java/org/apache/kafka/clients/producer/internals/Partitioner.java 
483899d2e69b33655d0e08949f5f64af2519660a 
  clients/src/main/java/org/apache/kafka/clients/producer/internals/Sender.java 
ccc03d8447ebba40131a70e16969686ac4aab58a 
  clients/src/main/java/org/apache/kafka/common/Cluster.java 
d3299b944062d96852452de455902659ad8af757 
  clients/src/main/java/org/apache/kafka/common/PartitionInfo.java 
b15aa2c3ef2d7c4b24618ff42fd4da324237a813 
  clients/src/main/java/org/apache/kafka/common/config/ConfigDef.java 
98cb79b701918eca3f6ca9823b6c7b7c97b3ecec 
  clients/src/main/java/org/apache/kafka/common/errors/ApiException.java 
7c948b166a8ac07616809f260754116ae7764973 
  clients/src/main/java/org/apache/kafka/common/network/Selectable.java 
b68bbf00ab8eba6c5867d346c91188142593ca6e 

[jira] [Updated] (KAFKA-1835) Kafka new producer needs options to make blocking behavior explicit

2015-01-21 Thread Paul Pearcy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Pearcy updated KAFKA-1835:
---
Attachment: KAFKA-1835.patch

 Kafka new producer needs options to make blocking behavior explicit
 ---

 Key: KAFKA-1835
 URL: https://issues.apache.org/jira/browse/KAFKA-1835
 Project: Kafka
  Issue Type: Improvement
  Components: clients
Affects Versions: 0.8.2, 0.8.3, 0.9.0
Reporter: Paul Pearcy
 Fix For: 0.8.2

 Attachments: KAFKA-1835-New-producer--blocking_v0.patch, 
 KAFKA-1835.patch

   Original Estimate: 504h
  Remaining Estimate: 504h

 The new (0.8.2 standalone) producer will block the first time it attempts to 
 retrieve metadata for a topic. This is not the desired behavior in some use 
 cases where async non-blocking guarantees are required and message loss is 
 acceptable in known cases. Also, most developers will assume an API that 
 returns a future is safe to call in a critical request path. 
 Discussing on the mailing list, the most viable option is to have the 
 following settings:
  pre.initialize.topics=x,y,z
  pre.initialize.timeout=x
  
 This moves potential blocking to the init of the producer and outside of some 
 random request. The potential will still exist for blocking in a corner case 
 where connectivity with Kafka is lost and a topic not included in pre-init 
 has a message sent for the first time. 
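 If the proposed settings were adopted, a producer configuration might look 
 like the following (the property names come from the proposal above and are 
 not a shipped API; the topic names and timeout value are placeholders):

```properties
# Hypothetical settings from this proposal -- not part of the
# released 0.8.2 producer configuration.
# Topics whose metadata is fetched eagerly at producer init:
pre.initialize.topics=orders,payments
# Max milliseconds init may block waiting for that metadata:
pre.initialize.timeout=5000
```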
 There is the question of what to do when initialization fails. There are a 
 couple of options that I'd like available:
 - Fail creation of the client 
 - Fail all sends until the meta is available 
 Open to input on how the above options should be expressed. 
 It is also worth noting that more nuanced solutions exist that could work 
 without the extra settings; they just end up adding extra complications 
 without much value at the end of the day. For instance, the producer could 
 accept and queue messages (note: more complicated than I am making it sound, 
 because all accepted messages would need to be stored in pre-partitioned 
 compact binary form), but you are still forced to choose between blocking 
 and dropping messages at some point. 
 I have some test cases I am going to port over to the Kafka producer 
 integration ones and start from there. My current impl is in Scala, but 
 porting to Java shouldn't be a big deal (was using a promise to track init 
 status, but will likely need to make that an atomic bool). 
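 The atomic-bool approach mentioned in the last paragraph can be sketched in 
 plain Java. This is an illustrative sketch only, not actual producer code; 
 the class and method names are invented for the example. The idea: instead 
 of blocking, send() fails fast with an exceptional future until metadata 
 initialization has been flagged complete.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch of gating sends on an atomic init flag: callers get a
// failed future (rather than blocking) before init completes.
public class InitGatedSender {
    private final AtomicBoolean initialized = new AtomicBoolean(false);

    // Called once metadata pre-initialization finishes.
    public void markInitialized() {
        initialized.set(true);
    }

    // Never blocks: returns a failed future until initialized.
    public CompletableFuture<String> send(String record) {
        if (!initialized.get()) {
            CompletableFuture<String> f = new CompletableFuture<>();
            f.completeExceptionally(
                new IllegalStateException("metadata not initialized"));
            return f;
        }
        // Stand-in for the real async send path.
        return CompletableFuture.completedFuture("acked: " + record);
    }

    public static void main(String[] args) {
        InitGatedSender sender = new InitGatedSender();
        System.out.println(sender.send("m1").isCompletedExceptionally()); // true
        sender.markInitialized();
        System.out.println(sender.send("m2").isCompletedExceptionally()); // false
    }
}
```

 A promise (CompletableFuture) could track init status instead, which also 
 lets callers wait for readiness; the atomic bool is the simpler port.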



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1835) Kafka new producer needs options to make blocking behavior explicit

2015-01-21 Thread Paul Pearcy (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14287035#comment-14287035
 ] 

Paul Pearcy commented on KAFKA-1835:


Created reviewboard https://reviews.apache.org/r/30158/diff/
 against branch origin/trunk



[jira] [Commented] (KAFKA-1835) Kafka new producer needs options to make blocking behavior explicit

2015-01-21 Thread Paul Pearcy (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14287042#comment-14287042
 ] 

Paul Pearcy commented on KAFKA-1835:


Thanks, Ewan. I created a review, added your comments, and will follow up. 



Review Request 30158: Patch for KAFKA-1835

2015-01-21 Thread Paul Pearcy

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/30158/
---

Review request for kafka.


Bugs: KAFKA-1835
https://issues.apache.org/jira/browse/KAFKA-1835


Repository: kafka


Description
---

KAFKA-1835 - New producer updates to make blocking behavior explicit


Diffs
-

  clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java 
fc71710dd5997576d3841a1c3b0f7e19a8c9698e 
  clients/src/main/java/org/apache/kafka/clients/producer/ProducerConfig.java 
8b3e565edd1ae04d8d34bd9f1a41e9fa8c880a75 
  core/src/test/scala/integration/kafka/api/ProducerBlockingTest.scala 
PRE-CREATION 
  core/src/test/scala/unit/kafka/utils/TestUtils.scala 
ac15d34425795d5be20c51b01fa1108bdcd66583 

Diff: https://reviews.apache.org/r/30158/diff/


Testing
---


Thanks,

Paul Pearcy



[jira] [Updated] (KAFKA-1728) update 082 docs

2015-01-21 Thread Manikumar Reddy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar Reddy updated KAFKA-1728:
---
Attachment: missing-config-props-0.8.2.patch

Uploaded a patch to add missing config properties to 0.8.2 docs.

 update 082 docs
 ---

 Key: KAFKA-1728
 URL: https://issues.apache.org/jira/browse/KAFKA-1728
 Project: Kafka
  Issue Type: Task
Affects Versions: 0.8.2
Reporter: Jun Rao
Priority: Blocker
 Fix For: 0.8.2

 Attachments: default-config-value-0.8.2.patch, 
 missing-config-props-0.8.2.patch


 We need to update the docs for 082 release.
 https://svn.apache.org/repos/asf/kafka/site/082
 http://kafka.apache.org/082/documentation.html


