Re: Review Request 33620: Patch for KAFKA-1690

2015-05-11 Thread Sriharsha Chintalapani

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/33620/
---

(Updated May 11, 2015, 6:31 a.m.)


Review request for kafka.


Bugs: KAFKA-1690
https://issues.apache.org/jira/browse/KAFKA-1690


Repository: kafka


Description (updated)
---

KAFKA-1690. new java producer needs ssl support as a client.


Diffs (updated)
-

  build.gradle fef515b3b2276b1f861e7cc2e33e74c3ce5e405b 
  checkstyle/checkstyle.xml a215ff36e9252879f1e0be5a86fef9a875bb8f38 
  checkstyle/import-control.xml f2e6cec267e67ce8e261341e373718e14a8e8e03 
  clients/src/main/java/org/apache/kafka/clients/ClientUtils.java 
0d68bf1e1e90fe9d5d4397ddf817b9a9af8d9f7a 
  clients/src/main/java/org/apache/kafka/clients/CommonClientConfigs.java 
cf32e4e7c40738fe6d8adc36ae0cfad459ac5b0b 
  clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerConfig.java 
bdff518b732105823058e6182f445248b45dc388 
  clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java 
d301be4709f7b112e1f3a39f3c04cfa65f00fa60 
  clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java 
42b12928781463b56fc4a45d96bb4da2745b6d95 
  clients/src/main/java/org/apache/kafka/clients/producer/ProducerConfig.java 
187d0004c8c46b6664ddaffecc6166d4b47351e5 
  clients/src/main/java/org/apache/kafka/common/config/AbstractConfig.java 
c4fa058692f50abb4f47bd344119d805c60123f5 
  clients/src/main/java/org/apache/kafka/common/network/Authenticator.java 
PRE-CREATION 
  clients/src/main/java/org/apache/kafka/common/network/Channel.java 
PRE-CREATION 
  
clients/src/main/java/org/apache/kafka/common/network/DefaultAuthenticator.java 
PRE-CREATION 
  
clients/src/main/java/org/apache/kafka/common/network/PlainTextTransportLayer.java
 PRE-CREATION 
  clients/src/main/java/org/apache/kafka/common/network/SSLFactory.java 
PRE-CREATION 
  clients/src/main/java/org/apache/kafka/common/network/SSLTransportLayer.java 
PRE-CREATION 
  clients/src/main/java/org/apache/kafka/common/network/Selectable.java 
b5f8d83e89f9026dc0853e5f92c00b2d7f043e22 
  clients/src/main/java/org/apache/kafka/common/network/Selector.java 
57de0585e5e9a53eb9dcd99cac1ab3eb2086a302 
  clients/src/main/java/org/apache/kafka/common/network/TransportLayer.java 
PRE-CREATION 
  clients/src/main/java/org/apache/kafka/common/protocol/SecurityProtocol.java 
dab1a94dd29563688b6ecf4eeb0e180b06049d3f 
  clients/src/main/java/org/apache/kafka/common/utils/Utils.java 
f73eedb030987f018d8446bb1dcd98d19fa97331 
  clients/src/test/java/org/apache/kafka/common/network/EchoServer.java 
PRE-CREATION 
  clients/src/test/java/org/apache/kafka/common/network/SSLSelectorTest.java 
PRE-CREATION 
  clients/src/test/java/org/apache/kafka/common/network/SelectorTest.java 
d5b306b026e788b4e5479f3419805aa49ae889f3 
  clients/src/test/java/org/apache/kafka/common/utils/UtilsTest.java 
2ebe3c21f611dc133a2dbb8c7dfb0845f8c21498 
  clients/src/test/java/org/apache/kafka/test/TestSSLUtils.java PRE-CREATION 

Diff: https://reviews.apache.org/r/33620/diff/
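The diff adds SSLFactory and SSLTransportLayer to the client networking layer. As a rough illustration of the JSSE plumbing such a factory wraps (this is a sketch, not the patch's actual code), a client-mode SSLEngine for a broker endpoint can be built from an SSLContext like this:

```java
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLEngine;
import java.security.NoSuchAlgorithmException;

public class SslEngineSketch {
    // Build a client-mode SSLEngine using the JVM's default trust material.
    // A real SSLFactory would additionally load configured key/trust stores
    // before initializing the context.
    static SSLEngine clientEngine(String host, int port) throws NoSuchAlgorithmException {
        SSLContext ctx = SSLContext.getDefault();
        SSLEngine engine = ctx.createSSLEngine(host, port);
        engine.setUseClientMode(true);  // the producer acts as the TLS client
        return engine;
    }

    public static void main(String[] args) throws Exception {
        SSLEngine engine = clientEngine("localhost", 9093);
        System.out.println(engine.getUseClientMode());
    }
}
```

The host and port here are placeholders; the actual endpoint comes from the producer's bootstrap configuration.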


Testing
---


Thanks,

Sriharsha Chintalapani



[jira] [Commented] (KAFKA-1690) new java producer needs ssl support as a client

2015-05-11 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14537641#comment-14537641
 ] 

Sriharsha Chintalapani commented on KAFKA-1690:
---

Updated reviewboard https://reviews.apache.org/r/33620/diff/
 against branch origin/trunk

 new java producer needs ssl support as a client
 ---

 Key: KAFKA-1690
 URL: https://issues.apache.org/jira/browse/KAFKA-1690
 Project: Kafka
  Issue Type: Sub-task
Reporter: Joe Stein
Assignee: Sriharsha Chintalapani
 Fix For: 0.8.3

 Attachments: KAFKA-1690.patch, KAFKA-1690.patch, 
 KAFKA-1690_2015-05-10_23:20:30.patch, KAFKA-1690_2015-05-10_23:31:42.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1690) new java producer needs ssl support as a client

2015-05-11 Thread Sriharsha Chintalapani (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sriharsha Chintalapani updated KAFKA-1690:
--
Attachment: KAFKA-1690_2015-05-10_23:20:30.patch

 new java producer needs ssl support as a client
 ---

 Key: KAFKA-1690
 URL: https://issues.apache.org/jira/browse/KAFKA-1690
 Project: Kafka
  Issue Type: Sub-task
Reporter: Joe Stein
Assignee: Sriharsha Chintalapani
 Fix For: 0.8.3

 Attachments: KAFKA-1690.patch, KAFKA-1690.patch, 
 KAFKA-1690_2015-05-10_23:20:30.patch








[jira] [Commented] (KAFKA-1690) new java producer needs ssl support as a client

2015-05-11 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14537633#comment-14537633
 ] 

Sriharsha Chintalapani commented on KAFKA-1690:
---

Updated reviewboard https://reviews.apache.org/r/33620/diff/
 against branch origin/trunk

 new java producer needs ssl support as a client
 ---

 Key: KAFKA-1690
 URL: https://issues.apache.org/jira/browse/KAFKA-1690
 Project: Kafka
  Issue Type: Sub-task
Reporter: Joe Stein
Assignee: Sriharsha Chintalapani
 Fix For: 0.8.3

 Attachments: KAFKA-1690.patch, KAFKA-1690.patch, 
 KAFKA-1690_2015-05-10_23:20:30.patch








Re: Review Request 33620: Patch for KAFKA-1690

2015-05-11 Thread Sriharsha Chintalapani

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/33620/
---

(Updated May 11, 2015, 6:20 a.m.)


Review request for kafka.


Bugs: KAFKA-1690
https://issues.apache.org/jira/browse/KAFKA-1690


Repository: kafka


Description (updated)
---

KAFKA-1690. new java producer needs ssl support as a client.


Diffs (updated)
-

  build.gradle fef515b3b2276b1f861e7cc2e33e74c3ce5e405b 
  checkstyle/checkstyle.xml a215ff36e9252879f1e0be5a86fef9a875bb8f38 
  checkstyle/import-control.xml f2e6cec267e67ce8e261341e373718e14a8e8e03 
  clients/src/main/java/org/apache/kafka/clients/ClientUtils.java 
0d68bf1e1e90fe9d5d4397ddf817b9a9af8d9f7a 
  clients/src/main/java/org/apache/kafka/clients/CommonClientConfigs.java 
cf32e4e7c40738fe6d8adc36ae0cfad459ac5b0b 
  clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerConfig.java 
bdff518b732105823058e6182f445248b45dc388 
  clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java 
d301be4709f7b112e1f3a39f3c04cfa65f00fa60 
  clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java 
42b12928781463b56fc4a45d96bb4da2745b6d95 
  clients/src/main/java/org/apache/kafka/clients/producer/ProducerConfig.java 
187d0004c8c46b6664ddaffecc6166d4b47351e5 
  clients/src/main/java/org/apache/kafka/common/config/AbstractConfig.java 
c4fa058692f50abb4f47bd344119d805c60123f5 
  clients/src/main/java/org/apache/kafka/common/network/Authenticator.java 
PRE-CREATION 
  clients/src/main/java/org/apache/kafka/common/network/Channel.java 
PRE-CREATION 
  
clients/src/main/java/org/apache/kafka/common/network/DefaultAuthenticator.java 
PRE-CREATION 
  
clients/src/main/java/org/apache/kafka/common/network/PlainTextTransportLayer.java
 PRE-CREATION 
  clients/src/main/java/org/apache/kafka/common/network/SSLFactory.java 
PRE-CREATION 
  clients/src/main/java/org/apache/kafka/common/network/SSLTransportLayer.java 
PRE-CREATION 
  clients/src/main/java/org/apache/kafka/common/network/Selectable.java 
b5f8d83e89f9026dc0853e5f92c00b2d7f043e22 
  clients/src/main/java/org/apache/kafka/common/network/Selector.java 
57de0585e5e9a53eb9dcd99cac1ab3eb2086a302 
  clients/src/main/java/org/apache/kafka/common/network/TransportLayer.java 
PRE-CREATION 
  clients/src/main/java/org/apache/kafka/common/protocol/SecurityProtocol.java 
dab1a94dd29563688b6ecf4eeb0e180b06049d3f 
  clients/src/main/java/org/apache/kafka/common/utils/Utils.java 
f73eedb030987f018d8446bb1dcd98d19fa97331 
  clients/src/test/java/org/apache/kafka/common/network/EchoServer.java 
PRE-CREATION 
  clients/src/test/java/org/apache/kafka/common/network/SSLSelectorTest.java 
PRE-CREATION 
  clients/src/test/java/org/apache/kafka/common/network/SelectorTest.java 
d5b306b026e788b4e5479f3419805aa49ae889f3 
  clients/src/test/java/org/apache/kafka/common/utils/UtilsTest.java 
2ebe3c21f611dc133a2dbb8c7dfb0845f8c21498 
  clients/src/test/java/org/apache/kafka/test/TestSSLUtils.java PRE-CREATION 

Diff: https://reviews.apache.org/r/33620/diff/


Testing
---


Thanks,

Sriharsha Chintalapani



[jira] [Issue Comment Deleted] (KAFKA-1646) Improve consumer read performance for Windows

2015-05-11 Thread Honghai Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Honghai Chen updated KAFKA-1646:

Comment: was deleted

(was: Created reviewboard  against branch origin/trunk)

 Improve consumer read performance for Windows
 -

 Key: KAFKA-1646
 URL: https://issues.apache.org/jira/browse/KAFKA-1646
 Project: Kafka
  Issue Type: Improvement
  Components: log
Affects Versions: 0.8.1.1
 Environment: Windows
Reporter: xueqiang wang
Assignee: xueqiang wang
  Labels: newbie, patch
 Attachments: Improve consumer read performance for Windows.patch, 
 KAFKA-1646-truncate-off-trailing-zeros-on-broker-restart-if-bro.patch, 
 KAFKA-1646.patch, KAFKA-1646_20141216_163008.patch, 
 KAFKA-1646_20150306_005526.patch, KAFKA-1646_20150312_200352.patch, 
 KAFKA-1646_20150414_035415.patch, KAFKA-1646_20150414_184503.patch, 
 KAFKA-1646_20150511_AddTestcases.patch


 This patch is for the Windows platform only. On Windows, if more than one 
 replica is writing to disk, the segment log files become fragmented on disk 
 and consumer read performance drops sharply. This fix allocates more disk 
 space when rolling a new segment, which improves consumer read performance 
 on the NTFS file system. The patch doesn't affect file allocation on other 
 filesystems, since it only adds statements guarded by 'if(Os.iswindow)' or 
 methods used only on Windows.
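The preallocation idea described above can be sketched in plain Java (the file name and size below are illustrative, not taken from the patch): when a new segment rolls, its length is set up front so NTFS reserves contiguous space instead of growing the file write by write.

```java
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;

public class PreallocSketch {
    // Reserve the full segment size at creation time; on NTFS this keeps
    // the file contiguous even when several replicas write concurrently.
    static File rollSegment(File dir, long size) throws IOException {
        File segment = new File(dir, System.nanoTime() + ".log");
        try (RandomAccessFile raf = new RandomAccessFile(segment, "rw")) {
            raf.setLength(size);  // extends the file with zero-filled space
        }
        return segment;
    }

    public static void main(String[] args) throws IOException {
        File dir = new File(System.getProperty("java.io.tmpdir"));
        File seg = rollSegment(dir, 1L << 20);  // 1 MiB segment
        System.out.println(seg.length());       // 1048576
        seg.delete();
    }
}
```

On broker restart the trailing zero-filled region would then have to be truncated off, which is what the truncate-trailing-zeros attachment in the list above addresses.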





[jira] [Updated] (KAFKA-1646) Improve consumer read performance for Windows

2015-05-11 Thread Honghai Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Honghai Chen updated KAFKA-1646:

Attachment: KAFKA-1646_20150511_AddTestcases.patch

 Improve consumer read performance for Windows
 -

 Key: KAFKA-1646
 URL: https://issues.apache.org/jira/browse/KAFKA-1646
 Project: Kafka
  Issue Type: Improvement
  Components: log
Affects Versions: 0.8.1.1
 Environment: Windows
Reporter: xueqiang wang
Assignee: xueqiang wang
  Labels: newbie, patch
 Attachments: Improve consumer read performance for Windows.patch, 
 KAFKA-1646-truncate-off-trailing-zeros-on-broker-restart-if-bro.patch, 
 KAFKA-1646.patch, KAFKA-1646_20141216_163008.patch, 
 KAFKA-1646_20150306_005526.patch, KAFKA-1646_20150312_200352.patch, 
 KAFKA-1646_20150414_035415.patch, KAFKA-1646_20150414_184503.patch, 
 KAFKA-1646_20150511_AddTestcases.patch







[jira] [Updated] (KAFKA-1646) Improve consumer read performance for Windows

2015-05-11 Thread Honghai Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Honghai Chen updated KAFKA-1646:

Attachment: (was: KAFKA-1646_20150422.patch)

 Improve consumer read performance for Windows
 -

 Key: KAFKA-1646
 URL: https://issues.apache.org/jira/browse/KAFKA-1646
 Project: Kafka
  Issue Type: Improvement
  Components: log
Affects Versions: 0.8.1.1
 Environment: Windows
Reporter: xueqiang wang
Assignee: xueqiang wang
  Labels: newbie, patch
 Attachments: Improve consumer read performance for Windows.patch, 
 KAFKA-1646-truncate-off-trailing-zeros-on-broker-restart-if-bro.patch, 
 KAFKA-1646.patch, KAFKA-1646_20141216_163008.patch, 
 KAFKA-1646_20150306_005526.patch, KAFKA-1646_20150312_200352.patch, 
 KAFKA-1646_20150414_035415.patch, KAFKA-1646_20150414_184503.patch, 
 KAFKA-1646_20150511_AddTestcases.patch







[jira] [Commented] (KAFKA-1646) Improve consumer read performance for Windows

2015-05-11 Thread Honghai Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14537786#comment-14537786
 ] 

Honghai Chen commented on KAFKA-1646:
-

Add test cases https://reviews.apache.org/r/33204/diff/3/

 Improve consumer read performance for Windows
 -

 Key: KAFKA-1646
 URL: https://issues.apache.org/jira/browse/KAFKA-1646
 Project: Kafka
  Issue Type: Improvement
  Components: log
Affects Versions: 0.8.1.1
 Environment: Windows
Reporter: xueqiang wang
Assignee: xueqiang wang
  Labels: newbie, patch
 Attachments: Improve consumer read performance for Windows.patch, 
 KAFKA-1646-truncate-off-trailing-zeros-on-broker-restart-if-bro.patch, 
 KAFKA-1646.patch, KAFKA-1646_20141216_163008.patch, 
 KAFKA-1646_20150306_005526.patch, KAFKA-1646_20150312_200352.patch, 
 KAFKA-1646_20150414_035415.patch, KAFKA-1646_20150414_184503.patch, 
 KAFKA-1646_20150422.patch







[jira] [Updated] (KAFKA-1646) Improve consumer read performance for Windows

2015-05-11 Thread Honghai Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Honghai Chen updated KAFKA-1646:

Attachment: (was: KAFKA-1646_20150422.patch)

 Improve consumer read performance for Windows
 -

 Key: KAFKA-1646
 URL: https://issues.apache.org/jira/browse/KAFKA-1646
 Project: Kafka
  Issue Type: Improvement
  Components: log
Affects Versions: 0.8.1.1
 Environment: Windows
Reporter: xueqiang wang
Assignee: xueqiang wang
  Labels: newbie, patch
 Attachments: Improve consumer read performance for Windows.patch, 
 KAFKA-1646-truncate-off-trailing-zeros-on-broker-restart-if-bro.patch, 
 KAFKA-1646.patch, KAFKA-1646_20141216_163008.patch, 
 KAFKA-1646_20150306_005526.patch, KAFKA-1646_20150312_200352.patch, 
 KAFKA-1646_20150414_035415.patch, KAFKA-1646_20150414_184503.patch, 
 KAFKA-1646_20150422.patch







[jira] [Updated] (KAFKA-1646) Improve consumer read performance for Windows

2015-05-11 Thread Honghai Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Honghai Chen updated KAFKA-1646:

Attachment: KAFKA-1646_20150422.patch

 Improve consumer read performance for Windows
 -

 Key: KAFKA-1646
 URL: https://issues.apache.org/jira/browse/KAFKA-1646
 Project: Kafka
  Issue Type: Improvement
  Components: log
Affects Versions: 0.8.1.1
 Environment: Windows
Reporter: xueqiang wang
Assignee: xueqiang wang
  Labels: newbie, patch
 Attachments: Improve consumer read performance for Windows.patch, 
 KAFKA-1646-truncate-off-trailing-zeros-on-broker-restart-if-bro.patch, 
 KAFKA-1646.patch, KAFKA-1646_20141216_163008.patch, 
 KAFKA-1646_20150306_005526.patch, KAFKA-1646_20150312_200352.patch, 
 KAFKA-1646_20150414_035415.patch, KAFKA-1646_20150414_184503.patch, 
 KAFKA-1646_20150422.patch







[jira] [Updated] (KAFKA-1646) Improve consumer read performance for Windows

2015-05-11 Thread Honghai Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Honghai Chen updated KAFKA-1646:

Attachment: KAFKA-1646_20150511_AddTestcases.patch

 Improve consumer read performance for Windows
 -

 Key: KAFKA-1646
 URL: https://issues.apache.org/jira/browse/KAFKA-1646
 Project: Kafka
  Issue Type: Improvement
  Components: log
Affects Versions: 0.8.1.1
 Environment: Windows
Reporter: xueqiang wang
Assignee: xueqiang wang
  Labels: newbie, patch
 Attachments: Improve consumer read performance for Windows.patch, 
 KAFKA-1646-truncate-off-trailing-zeros-on-broker-restart-if-bro.patch, 
 KAFKA-1646.patch, KAFKA-1646_20141216_163008.patch, 
 KAFKA-1646_20150306_005526.patch, KAFKA-1646_20150312_200352.patch, 
 KAFKA-1646_20150414_035415.patch, KAFKA-1646_20150414_184503.patch, 
 KAFKA-1646_20150422.patch, KAFKA-1646_20150511_AddTestcases.patch







[jira] [Commented] (KAFKA-1646) Improve consumer read performance for Windows

2015-05-11 Thread Honghai Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14537793#comment-14537793
 ] 

Honghai Chen commented on KAFKA-1646:
-

Created reviewboard  against branch origin/trunk

 Improve consumer read performance for Windows
 -

 Key: KAFKA-1646
 URL: https://issues.apache.org/jira/browse/KAFKA-1646
 Project: Kafka
  Issue Type: Improvement
  Components: log
Affects Versions: 0.8.1.1
 Environment: Windows
Reporter: xueqiang wang
Assignee: xueqiang wang
  Labels: newbie, patch
 Attachments: Improve consumer read performance for Windows.patch, 
 KAFKA-1646-truncate-off-trailing-zeros-on-broker-restart-if-bro.patch, 
 KAFKA-1646.patch, KAFKA-1646_20141216_163008.patch, 
 KAFKA-1646_20150306_005526.patch, KAFKA-1646_20150312_200352.patch, 
 KAFKA-1646_20150414_035415.patch, KAFKA-1646_20150414_184503.patch, 
 KAFKA-1646_20150422.patch, KAFKA-1646_20150511_AddTestcases.patch







[jira] [Issue Comment Deleted] (KAFKA-1646) Improve consumer read performance for Windows

2015-05-11 Thread Honghai Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Honghai Chen updated KAFKA-1646:

Comment: was deleted

(was: Created reviewboard  against branch origin/trunk)

 Improve consumer read performance for Windows
 -

 Key: KAFKA-1646
 URL: https://issues.apache.org/jira/browse/KAFKA-1646
 Project: Kafka
  Issue Type: Improvement
  Components: log
Affects Versions: 0.8.1.1
 Environment: Windows
Reporter: xueqiang wang
Assignee: xueqiang wang
  Labels: newbie, patch
 Attachments: Improve consumer read performance for Windows.patch, 
 KAFKA-1646-truncate-off-trailing-zeros-on-broker-restart-if-bro.patch, 
 KAFKA-1646.patch, KAFKA-1646_20141216_163008.patch, 
 KAFKA-1646_20150306_005526.patch, KAFKA-1646_20150312_200352.patch, 
 KAFKA-1646_20150414_035415.patch, KAFKA-1646_20150414_184503.patch, 
 KAFKA-1646_20150422.patch, KAFKA-1646_20150511_AddTestcases.patch







[jira] [Updated] (KAFKA-1646) Improve consumer read performance for Windows

2015-05-11 Thread Honghai Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Honghai Chen updated KAFKA-1646:

Attachment: (was: KAFKA-1646_20150511_AddTestcases.patch)

 Improve consumer read performance for Windows
 -

 Key: KAFKA-1646
 URL: https://issues.apache.org/jira/browse/KAFKA-1646
 Project: Kafka
  Issue Type: Improvement
  Components: log
Affects Versions: 0.8.1.1
 Environment: Windows
Reporter: xueqiang wang
Assignee: xueqiang wang
  Labels: newbie, patch
 Attachments: Improve consumer read performance for Windows.patch, 
 KAFKA-1646-truncate-off-trailing-zeros-on-broker-restart-if-bro.patch, 
 KAFKA-1646.patch, KAFKA-1646_20141216_163008.patch, 
 KAFKA-1646_20150306_005526.patch, KAFKA-1646_20150312_200352.patch, 
 KAFKA-1646_20150414_035415.patch, KAFKA-1646_20150414_184503.patch, 
 KAFKA-1646_20150422.patch







[jira] [Updated] (KAFKA-1646) Improve consumer read performance for Windows

2015-05-11 Thread Honghai Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Honghai Chen updated KAFKA-1646:

Attachment: KAFKA-1646_20150511_AddTestcases.patch

 Improve consumer read performance for Windows
 -

 Key: KAFKA-1646
 URL: https://issues.apache.org/jira/browse/KAFKA-1646
 Project: Kafka
  Issue Type: Improvement
  Components: log
Affects Versions: 0.8.1.1
 Environment: Windows
Reporter: xueqiang wang
Assignee: xueqiang wang
  Labels: newbie, patch
 Attachments: Improve consumer read performance for Windows.patch, 
 KAFKA-1646-truncate-off-trailing-zeros-on-broker-restart-if-bro.patch, 
 KAFKA-1646.patch, KAFKA-1646_20141216_163008.patch, 
 KAFKA-1646_20150306_005526.patch, KAFKA-1646_20150312_200352.patch, 
 KAFKA-1646_20150414_035415.patch, KAFKA-1646_20150414_184503.patch, 
 KAFKA-1646_20150422.patch







[jira] [Commented] (KAFKA-1646) Improve consumer read performance for Windows

2015-05-11 Thread Honghai Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14537791#comment-14537791
 ] 

Honghai Chen commented on KAFKA-1646:
-

Created reviewboard  against branch origin/trunk

 Improve consumer read performance for Windows
 -

 Key: KAFKA-1646
 URL: https://issues.apache.org/jira/browse/KAFKA-1646
 Project: Kafka
  Issue Type: Improvement
  Components: log
Affects Versions: 0.8.1.1
 Environment: Windows
Reporter: xueqiang wang
Assignee: xueqiang wang
  Labels: newbie, patch
 Attachments: Improve consumer read performance for Windows.patch, 
 KAFKA-1646-truncate-off-trailing-zeros-on-broker-restart-if-bro.patch, 
 KAFKA-1646.patch, KAFKA-1646_20141216_163008.patch, 
 KAFKA-1646_20150306_005526.patch, KAFKA-1646_20150312_200352.patch, 
 KAFKA-1646_20150414_035415.patch, KAFKA-1646_20150414_184503.patch, 
 KAFKA-1646_20150422.patch







[jira] [Updated] (KAFKA-1646) Improve consumer read performance for Windows

2015-05-11 Thread Honghai Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Honghai Chen updated KAFKA-1646:

Attachment: (was: KAFKA-1646_20150511_AddTestcases.patch)

 Improve consumer read performance for Windows
 -

 Key: KAFKA-1646
 URL: https://issues.apache.org/jira/browse/KAFKA-1646
 Project: Kafka
  Issue Type: Improvement
  Components: log
Affects Versions: 0.8.1.1
 Environment: Windows
Reporter: xueqiang wang
Assignee: xueqiang wang
  Labels: newbie, patch
 Attachments: Improve consumer read performance for Windows.patch, 
 KAFKA-1646-truncate-off-trailing-zeros-on-broker-restart-if-bro.patch, 
 KAFKA-1646.patch, KAFKA-1646_20141216_163008.patch, 
 KAFKA-1646_20150306_005526.patch, KAFKA-1646_20150312_200352.patch, 
 KAFKA-1646_20150414_035415.patch, KAFKA-1646_20150414_184503.patch, 
 KAFKA-1646_20150422.patch







[jira] [Commented] (KAFKA-1646) Improve consumer read performance for Windows

2015-05-11 Thread Honghai Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14537796#comment-14537796
 ] 

Honghai Chen commented on KAFKA-1646:
-

Created reviewboard  against branch origin/trunk

 Improve consumer read performance for Windows
 -

 Key: KAFKA-1646
 URL: https://issues.apache.org/jira/browse/KAFKA-1646
 Project: Kafka
  Issue Type: Improvement
  Components: log
Affects Versions: 0.8.1.1
 Environment: Windows
Reporter: xueqiang wang
Assignee: xueqiang wang
  Labels: newbie, patch
 Attachments: Improve consumer read performance for Windows.patch, 
 KAFKA-1646-truncate-off-trailing-zeros-on-broker-restart-if-bro.patch, 
 KAFKA-1646.patch, KAFKA-1646_20141216_163008.patch, 
 KAFKA-1646_20150306_005526.patch, KAFKA-1646_20150312_200352.patch, 
 KAFKA-1646_20150414_035415.patch, KAFKA-1646_20150414_184503.patch, 
 KAFKA-1646_20150511_AddTestcases.patch







[jira] [Issue Comment Deleted] (KAFKA-1646) Improve consumer read performance for Windows

2015-05-11 Thread Honghai Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Honghai Chen updated KAFKA-1646:

Comment: was deleted

(was: Created reviewboard  against branch origin/trunk)

 Improve consumer read performance for Windows
 -

 Key: KAFKA-1646
 URL: https://issues.apache.org/jira/browse/KAFKA-1646
 Project: Kafka
  Issue Type: Improvement
  Components: log
Affects Versions: 0.8.1.1
 Environment: Windows
Reporter: xueqiang wang
Assignee: xueqiang wang
  Labels: newbie, patch
 Attachments: Improve consumer read performance for Windows.patch, 
 KAFKA-1646-truncate-off-trailing-zeros-on-broker-restart-if-bro.patch, 
 KAFKA-1646_20141216_163008.patch, KAFKA-1646_20150306_005526.patch, 
 KAFKA-1646_20150511_AddTestcases.patch


 This patch is for the Windows platform only. On Windows, if more than one 
 replica is writing to disk, the segment log files will not be consistent on 
 disk and consumer read performance drops greatly. This fix allocates more 
 disk space when rolling a new segment, which improves consumer read 
 performance on the NTFS file system. The patch doesn't affect file allocation 
 on other filesystems, since it only adds statements like 'if(Os.isWindows)' 
 or methods used only on Windows.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1646) Improve consumer read performance for Windows

2015-05-11 Thread Honghai Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Honghai Chen updated KAFKA-1646:

Attachment: (was: KAFKA-1646_20150414_184503.patch)

 Improve consumer read performance for Windows
 -

 Key: KAFKA-1646
 URL: https://issues.apache.org/jira/browse/KAFKA-1646
 Project: Kafka
  Issue Type: Improvement
  Components: log
Affects Versions: 0.8.1.1
 Environment: Windows
Reporter: xueqiang wang
Assignee: xueqiang wang
  Labels: newbie, patch
 Attachments: Improve consumer read performance for Windows.patch, 
 KAFKA-1646-truncate-off-trailing-zeros-on-broker-restart-if-bro.patch, 
 KAFKA-1646_20141216_163008.patch, KAFKA-1646_20150306_005526.patch, 
 KAFKA-1646_20150511_AddTestcases.patch


 This patch is for the Windows platform only. On Windows, if more than one 
 replica is writing to disk, the segment log files will not be consistent on 
 disk and consumer read performance drops greatly. This fix allocates more 
 disk space when rolling a new segment, which improves consumer read 
 performance on the NTFS file system. The patch doesn't affect file allocation 
 on other filesystems, since it only adds statements like 'if(Os.isWindows)' 
 or methods used only on Windows.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1646) Improve consumer read performance for Windows

2015-05-11 Thread Honghai Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Honghai Chen updated KAFKA-1646:

Attachment: (was: KAFKA-1646_20150414_035415.patch)

 Improve consumer read performance for Windows
 -

 Key: KAFKA-1646
 URL: https://issues.apache.org/jira/browse/KAFKA-1646
 Project: Kafka
  Issue Type: Improvement
  Components: log
Affects Versions: 0.8.1.1
 Environment: Windows
Reporter: xueqiang wang
Assignee: xueqiang wang
  Labels: newbie, patch
 Attachments: Improve consumer read performance for Windows.patch, 
 KAFKA-1646-truncate-off-trailing-zeros-on-broker-restart-if-bro.patch, 
 KAFKA-1646_20141216_163008.patch, KAFKA-1646_20150306_005526.patch, 
 KAFKA-1646_20150511_AddTestcases.patch


 This patch is for the Windows platform only. On Windows, if more than one 
 replica is writing to disk, the segment log files will not be consistent on 
 disk and consumer read performance drops greatly. This fix allocates more 
 disk space when rolling a new segment, which improves consumer read 
 performance on the NTFS file system. The patch doesn't affect file allocation 
 on other filesystems, since it only adds statements like 'if(Os.isWindows)' 
 or methods used only on Windows.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 33204: Patch for KAFKA-1646 add test cases

2015-05-11 Thread Honghai Chen

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/33204/
---

(Updated May 11, 2015, 10 a.m.)


Review request for kafka.


Summary (updated)
-

Patch for KAFKA-1646 add test cases


Bugs: KAFKA-1646
https://issues.apache.org/jira/browse/KAFKA-1646


Repository: kafka


Description (updated)
---

Kafka 1646 fix add test cases


Diffs (updated)
-

  core/src/main/scala/kafka/log/FileMessageSet.scala 
2522604bd985c513527fa0c863a7df677ff7a503 
  core/src/main/scala/kafka/log/Log.scala 
84e7b8fe9dd014884b60c4fbe13c835cf02a40e4 
  core/src/main/scala/kafka/log/LogConfig.scala 
a907da09e1ccede3b446459225e407cd1ae6d8b3 
  core/src/main/scala/kafka/log/LogSegment.scala 
ed039539ac18ea4d65144073915cf112f7374631 
  core/src/main/scala/kafka/server/KafkaConfig.scala 
9efa15ca5567b295ab412ee9eea7c03eb4cdc18b 
  core/src/main/scala/kafka/server/KafkaServer.scala 
b7d2a2842e17411a823b93bdedc84657cbd62be1 
  core/src/main/scala/kafka/utils/CoreUtils.scala 
d0a8fa701564b4c13b3cd6501e1b6218d77e8e06 
  core/src/test/scala/unit/kafka/log/FileMessageSetTest.scala 
cec1caecc51507ae339ebf8f3b8a028b12a1a056 
  core/src/test/scala/unit/kafka/log/LogSegmentTest.scala 
03fb3512c4a4450eac83d4cd4b0919baeaa22942 

Diff: https://reviews.apache.org/r/33204/diff/


Testing
---


Thanks,

Honghai Chen



[jira] [Updated] (KAFKA-1646) Improve consumer read performance for Windows

2015-05-11 Thread Honghai Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Honghai Chen updated KAFKA-1646:

Attachment: (was: KAFKA-1646_20150312_200352.patch)

 Improve consumer read performance for Windows
 -

 Key: KAFKA-1646
 URL: https://issues.apache.org/jira/browse/KAFKA-1646
 Project: Kafka
  Issue Type: Improvement
  Components: log
Affects Versions: 0.8.1.1
 Environment: Windows
Reporter: xueqiang wang
Assignee: xueqiang wang
  Labels: newbie, patch
 Attachments: Improve consumer read performance for Windows.patch, 
 KAFKA-1646-truncate-off-trailing-zeros-on-broker-restart-if-bro.patch, 
 KAFKA-1646_20141216_163008.patch, KAFKA-1646_20150306_005526.patch, 
 KAFKA-1646_20150414_035415.patch, KAFKA-1646_20150414_184503.patch, 
 KAFKA-1646_20150511_AddTestcases.patch


 This patch is for the Windows platform only. On Windows, if more than one 
 replica is writing to disk, the segment log files will not be consistent on 
 disk and consumer read performance drops greatly. This fix allocates more 
 disk space when rolling a new segment, which improves consumer read 
 performance on the NTFS file system. The patch doesn't affect file allocation 
 on other filesystems, since it only adds statements like 'if(Os.isWindows)' 
 or methods used only on Windows.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1646) Improve consumer read performance for Windows

2015-05-11 Thread Honghai Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Honghai Chen updated KAFKA-1646:

Attachment: (was: KAFKA-1646.patch)

 Improve consumer read performance for Windows
 -

 Key: KAFKA-1646
 URL: https://issues.apache.org/jira/browse/KAFKA-1646
 Project: Kafka
  Issue Type: Improvement
  Components: log
Affects Versions: 0.8.1.1
 Environment: Windows
Reporter: xueqiang wang
Assignee: xueqiang wang
  Labels: newbie, patch
 Attachments: Improve consumer read performance for Windows.patch, 
 KAFKA-1646-truncate-off-trailing-zeros-on-broker-restart-if-bro.patch, 
 KAFKA-1646_20141216_163008.patch, KAFKA-1646_20150306_005526.patch, 
 KAFKA-1646_20150312_200352.patch, KAFKA-1646_20150414_035415.patch, 
 KAFKA-1646_20150414_184503.patch, KAFKA-1646_20150511_AddTestcases.patch


 This patch is for the Windows platform only. On Windows, if more than one 
 replica is writing to disk, the segment log files will not be consistent on 
 disk and consumer read performance drops greatly. This fix allocates more 
 disk space when rolling a new segment, which improves consumer read 
 performance on the NTFS file system. The patch doesn't affect file allocation 
 on other filesystems, since it only adds statements like 'if(Os.isWindows)' 
 or methods used only on Windows.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1646) Improve consumer read performance for Windows

2015-05-11 Thread Honghai Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14537787#comment-14537787
 ] 

Honghai Chen commented on KAFKA-1646:
-

Created reviewboard  against branch origin/trunk

 Improve consumer read performance for Windows
 -

 Key: KAFKA-1646
 URL: https://issues.apache.org/jira/browse/KAFKA-1646
 Project: Kafka
  Issue Type: Improvement
  Components: log
Affects Versions: 0.8.1.1
 Environment: Windows
Reporter: xueqiang wang
Assignee: xueqiang wang
  Labels: newbie, patch
 Attachments: Improve consumer read performance for Windows.patch, 
 KAFKA-1646-truncate-off-trailing-zeros-on-broker-restart-if-bro.patch, 
 KAFKA-1646.patch, KAFKA-1646_20141216_163008.patch, 
 KAFKA-1646_20150306_005526.patch, KAFKA-1646_20150312_200352.patch, 
 KAFKA-1646_20150414_035415.patch, KAFKA-1646_20150414_184503.patch, 
 KAFKA-1646_20150422.patch


 This patch is for the Windows platform only. On Windows, if more than one 
 replica is writing to disk, the segment log files will not be consistent on 
 disk and consumer read performance drops greatly. This fix allocates more 
 disk space when rolling a new segment, which improves consumer read 
 performance on the NTFS file system. The patch doesn't affect file allocation 
 on other filesystems, since it only adds statements like 'if(Os.isWindows)' 
 or methods used only on Windows.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Issue Comment Deleted] (KAFKA-1646) Improve consumer read performance for Windows

2015-05-11 Thread Honghai Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Honghai Chen updated KAFKA-1646:

Comment: was deleted

(was: Created reviewboard  against branch origin/trunk)

 Improve consumer read performance for Windows
 -

 Key: KAFKA-1646
 URL: https://issues.apache.org/jira/browse/KAFKA-1646
 Project: Kafka
  Issue Type: Improvement
  Components: log
Affects Versions: 0.8.1.1
 Environment: Windows
Reporter: xueqiang wang
Assignee: xueqiang wang
  Labels: newbie, patch
 Attachments: Improve consumer read performance for Windows.patch, 
 KAFKA-1646-truncate-off-trailing-zeros-on-broker-restart-if-bro.patch, 
 KAFKA-1646.patch, KAFKA-1646_20141216_163008.patch, 
 KAFKA-1646_20150306_005526.patch, KAFKA-1646_20150312_200352.patch, 
 KAFKA-1646_20150414_035415.patch, KAFKA-1646_20150414_184503.patch, 
 KAFKA-1646_20150422.patch


 This patch is for the Windows platform only. On Windows, if more than one 
 replica is writing to disk, the segment log files will not be consistent on 
 disk and consumer read performance drops greatly. This fix allocates more 
 disk space when rolling a new segment, which improves consumer read 
 performance on the NTFS file system. The patch doesn't affect file allocation 
 on other filesystems, since it only adds statements like 'if(Os.isWindows)' 
 or methods used only on Windows.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1646) Improve consumer read performance for Windows

2015-05-11 Thread Honghai Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Honghai Chen updated KAFKA-1646:

Attachment: (was: KAFKA-1646_20150511_AddTestcases.patch)

 Improve consumer read performance for Windows
 -

 Key: KAFKA-1646
 URL: https://issues.apache.org/jira/browse/KAFKA-1646
 Project: Kafka
  Issue Type: Improvement
  Components: log
Affects Versions: 0.8.1.1
 Environment: Windows
Reporter: xueqiang wang
Assignee: xueqiang wang
  Labels: newbie, patch
 Attachments: Improve consumer read performance for Windows.patch, 
 KAFKA-1646-truncate-off-trailing-zeros-on-broker-restart-if-bro.patch, 
 KAFKA-1646.patch, KAFKA-1646_20141216_163008.patch, 
 KAFKA-1646_20150306_005526.patch, KAFKA-1646_20150312_200352.patch, 
 KAFKA-1646_20150414_035415.patch, KAFKA-1646_20150414_184503.patch


 This patch is for the Windows platform only. On Windows, if more than one 
 replica is writing to disk, the segment log files will not be consistent on 
 disk and consumer read performance drops greatly. This fix allocates more 
 disk space when rolling a new segment, which improves consumer read 
 performance on the NTFS file system. The patch doesn't affect file allocation 
 on other filesystems, since it only adds statements like 'if(Os.isWindows)' 
 or methods used only on Windows.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1646) Improve consumer read performance for Windows

2015-05-11 Thread Honghai Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Honghai Chen updated KAFKA-1646:

Attachment: KAFKA-1646_20150511_AddTestcases.patch

 Improve consumer read performance for Windows
 -

 Key: KAFKA-1646
 URL: https://issues.apache.org/jira/browse/KAFKA-1646
 Project: Kafka
  Issue Type: Improvement
  Components: log
Affects Versions: 0.8.1.1
 Environment: Windows
Reporter: xueqiang wang
Assignee: xueqiang wang
  Labels: newbie, patch
 Attachments: Improve consumer read performance for Windows.patch, 
 KAFKA-1646-truncate-off-trailing-zeros-on-broker-restart-if-bro.patch, 
 KAFKA-1646.patch, KAFKA-1646_20141216_163008.patch, 
 KAFKA-1646_20150306_005526.patch, KAFKA-1646_20150312_200352.patch, 
 KAFKA-1646_20150414_035415.patch, KAFKA-1646_20150414_184503.patch, 
 KAFKA-1646_20150511_AddTestcases.patch


 This patch is for the Windows platform only. On Windows, if more than one 
 replica is writing to disk, the segment log files will not be consistent on 
 disk and consumer read performance drops greatly. This fix allocates more 
 disk space when rolling a new segment, which improves consumer read 
 performance on the NTFS file system. The patch doesn't affect file allocation 
 on other filesystems, since it only adds statements like 'if(Os.isWindows)' 
 or methods used only on Windows.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1646) Improve consumer read performance for Windows

2015-05-11 Thread Honghai Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14537798#comment-14537798
 ] 

Honghai Chen commented on KAFKA-1646:
-

Created reviewboard  against branch origin/trunk

 Improve consumer read performance for Windows
 -

 Key: KAFKA-1646
 URL: https://issues.apache.org/jira/browse/KAFKA-1646
 Project: Kafka
  Issue Type: Improvement
  Components: log
Affects Versions: 0.8.1.1
 Environment: Windows
Reporter: xueqiang wang
Assignee: xueqiang wang
  Labels: newbie, patch
 Attachments: Improve consumer read performance for Windows.patch, 
 KAFKA-1646-truncate-off-trailing-zeros-on-broker-restart-if-bro.patch, 
 KAFKA-1646.patch, KAFKA-1646_20141216_163008.patch, 
 KAFKA-1646_20150306_005526.patch, KAFKA-1646_20150312_200352.patch, 
 KAFKA-1646_20150414_035415.patch, KAFKA-1646_20150414_184503.patch, 
 KAFKA-1646_20150511_AddTestcases.patch


 This patch is for the Windows platform only. On Windows, if more than one 
 replica is writing to disk, the segment log files will not be consistent on 
 disk and consumer read performance drops greatly. This fix allocates more 
 disk space when rolling a new segment, which improves consumer read 
 performance on the NTFS file system. The patch doesn't affect file allocation 
 on other filesystems, since it only adds statements like 'if(Os.isWindows)' 
 or methods used only on Windows.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1977) Make logEndOffset available in the Zookeeper consumer

2015-05-11 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-1977:
---
Status: In Progress  (was: Patch Available)

[~willf], thanks for the patch. This is a useful feature to add. I agree with 
Joe that we probably should just patch this in the new java consumer.

 Make logEndOffset available in the Zookeeper consumer
 -

 Key: KAFKA-1977
 URL: https://issues.apache.org/jira/browse/KAFKA-1977
 Project: Kafka
  Issue Type: Improvement
  Components: core
Reporter: Will Funnell
Priority: Minor
 Attachments: 
 Make_logEndOffset_available_in_the_Zookeeper_consumer.patch


 The requirement is to create a snapshot from the Kafka topic but NOT do 
 continual reads after that point. For example you might be creating a backup 
 of the data to a file.
 In order to achieve that, a recommended solution by Joel Koshy and Jay Kreps 
 was to expose the high watermark, as maxEndOffset, from the FetchResponse 
 object through to each MessageAndMetadata object in order to be aware when 
 the consumer has reached the end of each partition.
 The submitted patch achieves this by adding the maxEndOffset to the 
 PartitionTopicInfo, which is updated when a new message arrives in the 
 ConsumerFetcherThread and then exposed in MessageAndMetadata.
 See here for discussion:
 http://search-hadoop.com/m/4TaT4TpJy71



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2184) ConsumerConfig does not honor default java.util.Properties

2015-05-11 Thread Jason Whaley (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Whaley updated KAFKA-2184:

Description: 
When creating a ConsumerConfig from java.util.Properties, an 
IllegalArgumentException is thrown when the Properties instance is converted to 
a VerifiableProperties instance.  To reproduce:

{code}
package com.test;

import kafka.consumer.ConsumerConfig;

import java.util.Properties;

public class ContainsKeyTest {
    public static void main(String[] args) {
        Properties defaultProperties = new Properties();
        defaultProperties.put("zookeeper.connect", "192.168.50.4:2181");
        defaultProperties.put("zookeeper.session.timeout.ms", "400");
        defaultProperties.put("zookeeper.sync.time.ms", "200");
        defaultProperties.put("auto.commit.interval.ms", "1000");
        defaultProperties.put("group.id", "consumerGroup");

        Properties props = new Properties(defaultProperties);

        // prints 192.168.50.4:2181
        System.out.println(props.getProperty("zookeeper.connect"));

        // throws java.lang.IllegalArgumentException: requirement failed:
        // Missing required property 'zookeeper.connect'
        ConsumerConfig config = new ConsumerConfig(props);
    }
}
{code}

This is easy enough to work around, but default Properties should be honored by 
not calling containsKey inside of kafka.utils.VerifiableProperties#getString
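The root cause is standard java.util.Properties behavior: containsKey (inherited from Hashtable) ignores the defaults chain, while getProperty consults it. A minimal demonstration (the class name and property values here are illustrative):

```java
import java.util.Properties;

public class PropertiesDefaultsDemo {
    public static void main(String[] args) {
        Properties defaults = new Properties();
        defaults.put("zookeeper.connect", "192.168.50.4:2181");

        // The defaults live in a separate Properties instance referenced by
        // props; they are never copied into props' own hashtable.
        Properties props = new Properties(defaults);

        // getProperty falls back to the defaults chain:
        System.out.println(props.getProperty("zookeeper.connect")); // 192.168.50.4:2181

        // containsKey (from Hashtable) checks only props itself:
        System.out.println(props.containsKey("zookeeper.connect")); // false
    }
}
```

So any validation that probes a Properties object with containsKey, rather than getProperty, will report defaulted keys as missing.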


  was:
When creating a ConsumerConfig from java.util.Properties, an 
IllegalArgumentException is thrown when the Properties instance is converted to 
a VerifiableProperties instance.  To reproduce:

{code}
package com.test;

import kafka.consumer.ConsumerConfig;

import java.util.Properties;

public class ContainsKeyTest {
    public static void main(String[] args) {
        Properties defaultProperties = new Properties();
        defaultProperties.put("zookeeper.connect", "192.168.50.4:2181");
        defaultProperties.put("zookeeper.session.timeout.ms", "400");
        defaultProperties.put("zookeeper.sync.time.ms", "200");
        defaultProperties.put("auto.commit.interval.ms", "1000");
        defaultProperties.put("group.id", "consumerGroup");

        Properties props = new Properties(defaultProperties);

        // prints 192.168.50.4:2181
        System.out.println(props.getProperty("zookeeper.connect"));

        // throws java.lang.IllegalArgumentException: requirement failed:
        // Missing required property 'zookeeper.connect'
        ConsumerConfig config = new ConsumerConfig(props);
    }
}
{code}

This is easy enough to work around, but default Properties should be honored by 
not calling containsKey inside of kafka.utils.VerifiableProperties#getString 
method



 ConsumerConfig does not honor default java.util.Properties
 --

 Key: KAFKA-2184
 URL: https://issues.apache.org/jira/browse/KAFKA-2184
 Project: Kafka
  Issue Type: Bug
  Components: consumer
Affects Versions: 0.8.2.0
Reporter: Jason Whaley
Assignee: Neha Narkhede
Priority: Minor

 When creating a ConsumerConfig from java.util.Properties, an 
 IllegalArgumentException is thrown when the Properties instance is converted 
 to a VerifiableProperties instance.  To reproduce:
 {code}
 package com.test;
 import kafka.consumer.ConsumerConfig;
 import java.util.Properties;
 public class ContainsKeyTest {
     public static void main(String[] args) {
         Properties defaultProperties = new Properties();
         defaultProperties.put("zookeeper.connect", "192.168.50.4:2181");
         defaultProperties.put("zookeeper.session.timeout.ms", "400");
         defaultProperties.put("zookeeper.sync.time.ms", "200");
         defaultProperties.put("auto.commit.interval.ms", "1000");
         defaultProperties.put("group.id", "consumerGroup");
         Properties props = new Properties(defaultProperties);
         // prints 192.168.50.4:2181
         System.out.println(props.getProperty("zookeeper.connect"));
         // throws java.lang.IllegalArgumentException: requirement failed:
         // Missing required property 'zookeeper.connect'
         ConsumerConfig config = new ConsumerConfig(props);
     }
 }
 {code}
 This is easy enough to work around, but default Properties should be honored 
 by not calling containsKey inside of 
 kafka.utils.VerifiableProperties#getString



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 33916: Patch for KAFKA-2163

2015-05-11 Thread Jun Rao

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/33916/#review83264
---

Ship it!


Thanks for the patch. Looks good. Just a couple of minor comments below.


core/src/main/scala/kafka/server/OffsetManager.scala
https://reviews.apache.org/r/33916/#comment134170

Should this be changed from debug to info?



core/src/main/scala/kafka/server/OffsetManager.scala
https://reviews.apache.org/r/33916/#comment134172

Should this be info level logging?


- Jun Rao


On May 6, 2015, 10:06 p.m., Joel Koshy wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/33916/
 ---
 
 (Updated May 6, 2015, 10:06 p.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-2163
 https://issues.apache.org/jira/browse/KAFKA-2163
 
 
 Repository: kafka
 
 
 Description
 ---
 
 fix
 
 
 renames and logging improvements
 
 
 Diffs
 -
 
   core/src/main/scala/kafka/cluster/Partition.scala 
 122b1dbbe45cb27aed79b5be1e735fb617c716b0 
   core/src/main/scala/kafka/server/OffsetManager.scala 
 18680ce100f10035175cc0263ba7787ab0f6a17a 
 
 Diff: https://reviews.apache.org/r/33916/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Joel Koshy
 




Re: [DISCUSS] KIP-21 Configuration Management

2015-05-11 Thread Jay Kreps
I totally agree that ZK is not in-and-of-itself a configuration management
solution and it would be better if we could just keep all our config in
files. Anyone who has followed the various config discussions over the past
few years of discussion knows I'm the biggest proponent of immutable
file-driven config.

The analogy to normal unix services isn't actually quite right though.
The problem Kafka has is that a number of the configurable entities it
manages are added dynamically--topics, clients, consumer groups, etc. What
this actually resembles is not a unix services like HTTPD but a database,
and databases typically do manage config dynamically for exactly the same
reason.

The last few emails are arguing that files > ZK as a config solution. I
agree with this, but that isn't really the question, right? The reality is
that we need to be able to configure dynamically created entities and we
won't get a satisfactory solution to that using files (e.g. rsync is not an
acceptable topic creation mechanism). What we are discussing is having a
single config mechanism or multiple. If we have multiple you need to solve
the whole config lifecycle problem for both--management, audit, rollback,
etc.

Gwen, you were saying we couldn't get rid of the configuration file, not
sure if I understand. Is that because we need to give the URL for ZK?
Wouldn't the same argument work to say that we can't use configuration
files because we have to specify the file path? I think we can just give
the server the same --zookeeper argument we use everywhere else, right?

-Jay

On Sun, May 10, 2015 at 11:28 AM, Todd Palino tpal...@gmail.com wrote:

 I've been watching this discussion for a while, and I have to jump in and
 side with Gwen here. I see no benefit to putting the configs into Zookeeper
 entirely, and a lot of downside. The two biggest problems I have with this
 are:

 1) Configuration management. OK, so you can write glue for Chef to put
 configs into Zookeeper. You also need to write glue for Puppet. And
 Cfengine. And everything else out there. Files are an industry standard
 practice, they're how just about everyone handles it, and there's reasons
 for that, not just it's the way it's always been done.

 2) Auditing. Configuration files can easily be managed in a source
 repository system which tracks what changes were made and who made them. It
 also easily allows for rolling back to a previous version. Zookeeper does
 not.

 I see absolutely nothing wrong with putting the quota (client) configs and
 the topic config overrides in Zookeeper, and keeping everything else
 exactly where it is, in the configuration file. To handle configurations
 for the broker that can be changed at runtime without a restart, you can
 use the industry standard practice of catching SIGHUP and rereading the
 configuration file at that point.

 -Todd
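The SIGHUP-and-reread approach Todd describes can be sketched on the JVM roughly as follows. This is a minimal sketch, assuming use of the internal, unsupported sun.misc.Signal API (the handler body and class name are illustrative):

```java
import sun.misc.Signal;

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class HupReloadSketch {
    public static void main(String[] args) throws InterruptedException {
        CountDownLatch reloaded = new CountDownLatch(1);

        // On SIGHUP, reread the configuration file (stubbed out here).
        Signal.handle(new Signal("HUP"), sig -> {
            System.out.println("rereading broker config");
            reloaded.countDown();
        });

        // Simulate `kill -HUP <pid>`; the handler runs on a JVM signal thread.
        Signal.raise(new Signal("HUP"));
        reloaded.await(5, TimeUnit.SECONDS);
    }
}
```

Note that SIGHUP is unavailable on Windows and sun.misc.Signal is not a supported public API, which is one practical caveat to this pattern on the JVM.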


 On Sun, May 10, 2015 at 4:00 AM, Gwen Shapira gshap...@cloudera.com
 wrote:

  I am still not clear about the benefits of managing configuration in
  ZooKeeper vs. keeping the local file and adding a refresh mechanism
  (signal, protocol, zookeeper, or other).
 
  Benefits of staying with configuration file:
  1. In line with pretty much any Linux service that exists, so admins
 have a
  lot of related experience.
  2. Much smaller change to our code-base, so easier to patch, review and
  test. Lower risk overall.
 
  Can you walk me over the benefits of using Zookeeper? Especially since it
  looks like we can't get rid of the file entirely?
 
  Gwen
 
  On Thu, May 7, 2015 at 3:33 AM, Jun Rao j...@confluent.io wrote:
 
   One of the Chef users confirmed that Chef integration could still work
 if
   all configs are moved to ZK. My rough understanding of how Chef works
 is
   that a user first registers a service host with a Chef server. After
  that,
   a Chef client will be run on the service host. The user can then push
   config changes intended for a service/host to the Chef server. The
 server
   is then responsible for pushing the changes to Chef clients. Chef
 clients
   support pluggable logic. For example, it can generate a config file
 that
   Kafka broker will take. If we move all configs to ZK, we can customize
  the
   Chef client to use our config CLI to make the config changes in Kafka.
 In
   this model, one probably doesn't need to register every broker in Chef
  for
   the config push. Not sure if Puppet works in a similar way.
  
   Also for storing the configs, we probably can't store the broker/global
   level configs in Kafka itself (e.g. in a special topic). The reason is
  that
   in order to start a broker, we likely need to make some broker level
  config
   changes (e.g., the default log.dir may not be present, the default port
  may
   not be available, etc). If we need a broker to be up to make those
  changes,
   we get into this chicken and egg problem.
  
   Thanks,
  
   Jun
  
   On Tue, May 5, 2015 at 4:14 PM, Gwen Shapira gshap...@cloudera.com
   wrote:
  
  

[jira] [Updated] (KAFKA-2169) Upgrade to zkclient-0.5

2015-05-11 Thread Parth Brahmbhatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Parth Brahmbhatt updated KAFKA-2169:

Attachment: KAFKA-2169.patch

 Upgrade to zkclient-0.5
 ---

 Key: KAFKA-2169
 URL: https://issues.apache.org/jira/browse/KAFKA-2169
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.2.0
Reporter: Neha Narkhede
Assignee: Parth Brahmbhatt
Priority: Critical
 Attachments: KAFKA-2169.patch, KAFKA-2169.patch


 zkclient-0.5 is released 
 http://mvnrepository.com/artifact/com.101tec/zkclient/0.5 and has the fix for 
 KAFKA-824



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2169) Upgrade to zkclient-0.5

2015-05-11 Thread Parth Brahmbhatt (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14538376#comment-14538376
 ] 

Parth Brahmbhatt commented on KAFKA-2169:
-

Created reviewboard https://reviews.apache.org/r/34050/diff/
 against branch origin/trunk

 Upgrade to zkclient-0.5
 ---

 Key: KAFKA-2169
 URL: https://issues.apache.org/jira/browse/KAFKA-2169
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.2.0
Reporter: Neha Narkhede
Assignee: Parth Brahmbhatt
Priority: Critical
 Attachments: KAFKA-2169.patch, KAFKA-2169.patch


 zkclient-0.5 is released 
 http://mvnrepository.com/artifact/com.101tec/zkclient/0.5 and has the fix for 
 KAFKA-824



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2169) Upgrade to zkclient-0.5

2015-05-11 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14538709#comment-14538709
 ] 

Jun Rao commented on KAFKA-2169:


Parth,

1. Have you done the API compatibility test?
2. Did you address the comment on handleSessionEstablishmentError() in the RB?

Thanks,

 Upgrade to zkclient-0.5
 ---

 Key: KAFKA-2169
 URL: https://issues.apache.org/jira/browse/KAFKA-2169
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.2.0
Reporter: Neha Narkhede
Assignee: Parth Brahmbhatt
Priority: Critical
 Attachments: KAFKA-2169.patch, KAFKA-2169.patch, 
 KAFKA-2169_2015-05-11_13:52:57.patch


 zkclient-0.5 is released 
 http://mvnrepository.com/artifact/com.101tec/zkclient/0.5 and has the fix for 
 KAFKA-824



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2169) Upgrade to zkclient-0.5

2015-05-11 Thread Parth Brahmbhatt (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14538713#comment-14538713
 ] 

Parth Brahmbhatt commented on KAFKA-2169:
-

[~junrao]
1) Yes I tested with 0.8.2 and it works fine.
2) I commented on the RB and updated it.

 Upgrade to zkclient-0.5
 ---

 Key: KAFKA-2169
 URL: https://issues.apache.org/jira/browse/KAFKA-2169
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.2.0
Reporter: Neha Narkhede
Assignee: Parth Brahmbhatt
Priority: Critical
 Attachments: KAFKA-2169.patch, KAFKA-2169.patch, 
 KAFKA-2169_2015-05-11_13:52:57.patch


 zkclient-0.5 is released 
 http://mvnrepository.com/artifact/com.101tec/zkclient/0.5 and has the fix for 
 KAFKA-824



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 34050: Patch for KAFKA-2169

2015-05-11 Thread Parth Brahmbhatt

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/34050/#review83286
---



core/src/main/scala/kafka/consumer/ZookeeperTopicEventWatcher.scala
https://reviews.apache.org/r/34050/#comment134190

Actually it does not throw any exception now that we are just using 
System.exit. I have removed the @throws annotation.



core/src/main/scala/kafka/controller/KafkaController.scala
https://reviews.apache.org/r/34050/#comment134200

Why would we want to do this? If both listeners are invoked, as long as both 
of them exit, whichever one gets invoked first will just kill the process and 
the other one will never run. Why would we care which System.exit kills the 
process?


- Parth Brahmbhatt


On May 11, 2015, 8:53 p.m., Parth Brahmbhatt wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/34050/
 ---
 
 (Updated May 11, 2015, 8:53 p.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-2169
 https://issues.apache.org/jira/browse/KAFKA-2169
 
 
 Repository: kafka
 
 
 Description
 ---
 
 System.exit instead of throwing RuntimeException when ZooKeeper session 
 establishment fails.
 
 
 Removing the unnecessary @throws.
 
 
 Diffs
 -
 
   build.gradle fef515b3b2276b1f861e7cc2e33e74c3ce5e405b 
   core/src/main/scala/kafka/consumer/ZookeeperConsumerConnector.scala 
 aa8d9404a3e78a365df06404b79d0d8f694b4bd6 
   core/src/main/scala/kafka/consumer/ZookeeperTopicEventWatcher.scala 
 38f4ec0bd1b388cc8fc04b38bbb2e7aaa1c3f43b 
   core/src/main/scala/kafka/controller/KafkaController.scala 
 a6351163f5b6f080d6fa50bcc3533d445fcbc067 
   core/src/main/scala/kafka/server/KafkaHealthcheck.scala 
 861b7f644941f88ce04a4e95f6b28d18bf1db16d 
 
 Diff: https://reviews.apache.org/r/34050/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Parth Brahmbhatt
 




Re: Review Request 34050: Patch for KAFKA-2169

2015-05-11 Thread Jun Rao


 On May 11, 2015, 8:25 p.m., Jun Rao wrote:
  core/src/main/scala/kafka/controller/KafkaController.scala, lines 1116-1120
  https://reviews.apache.org/r/34050/diff/1/?file=955526#file955526line1116
 
  We register two StateListeners on the same zkclient instance in the 
  broker. If we can't establish a new ZK session, both listeners will be 
  called. However, we only need to exit in one of the listeners. So, we can 
  just do the logging and exit in handleSessionEstablishmentError() in 
  KafkaHealthcheck and add a comment in the listener in KafkaController that 
  the actual logic is done in the other listener.
  
  Ditto to the two listeners in the consumer.

Is this issue addressed?


- Jun


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/34050/#review83278
---


On May 11, 2015, 8:53 p.m., Parth Brahmbhatt wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/34050/
 ---
 
 (Updated May 11, 2015, 8:53 p.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-2169
 https://issues.apache.org/jira/browse/KAFKA-2169
 
 
 Repository: kafka
 
 
 Description
 ---
 
 System.exit instead of throwing RuntimeException when ZooKeeper session 
 establishment fails.
 
 
 Removing the unnecessary @throws.
 
 
 Diffs
 -
 
   build.gradle fef515b3b2276b1f861e7cc2e33e74c3ce5e405b 
   core/src/main/scala/kafka/consumer/ZookeeperConsumerConnector.scala 
 aa8d9404a3e78a365df06404b79d0d8f694b4bd6 
   core/src/main/scala/kafka/consumer/ZookeeperTopicEventWatcher.scala 
 38f4ec0bd1b388cc8fc04b38bbb2e7aaa1c3f43b 
   core/src/main/scala/kafka/controller/KafkaController.scala 
 a6351163f5b6f080d6fa50bcc3533d445fcbc067 
   core/src/main/scala/kafka/server/KafkaHealthcheck.scala 
 861b7f644941f88ce04a4e95f6b28d18bf1db16d 
 
 Diff: https://reviews.apache.org/r/34050/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Parth Brahmbhatt
 




[jira] [Updated] (KAFKA-2136) Client side protocol changes to return quota delays

2015-05-11 Thread Aditya A Auradkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aditya A Auradkar updated KAFKA-2136:
-
Attachment: KAFKA-2136_2015-05-11_14:50:56.patch

 Client side protocol changes to return quota delays
 ---

 Key: KAFKA-2136
 URL: https://issues.apache.org/jira/browse/KAFKA-2136
 Project: Kafka
  Issue Type: Sub-task
Reporter: Aditya Auradkar
Assignee: Aditya Auradkar
 Attachments: KAFKA-2136.patch, KAFKA-2136_2015-05-06_18:32:48.patch, 
 KAFKA-2136_2015-05-06_18:35:54.patch, KAFKA-2136_2015-05-11_14:50:56.patch


 As described in KIP-13, evolve the protocol to return a throttle_time_ms in 
 the Fetch and the ProduceResponse objects. Add client side metrics on the new 
 producer and consumer to expose the delay time.
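As a rough illustration of the client-side decoding this implies, here is a minimal Python sketch of reading such a field. The layout (a big-endian int32 throttle_time_ms at the start of the response body) is assumed from the KIP-13 description, not the final wire format:

```python
import struct

def read_throttle_time_ms(buf: bytes) -> int:
    # Assumed layout for illustration only: a big-endian int32
    # throttle_time_ms at offset 0 of the response body. The authoritative
    # wire format is whatever the KIP-13 protocol evolution finally specifies.
    (throttle_time_ms,) = struct.unpack_from(">i", buf, 0)
    return throttle_time_ms

# A response that the broker delayed by 150 ms due to quota enforcement:
delayed = struct.pack(">i", 150)
print(read_throttle_time_ms(delayed))  # prints 150
```

A client metric exposing the delay would simply record this value per response.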



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2185) Update to Gradle 2.4

2015-05-11 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-2185:
---
Assignee: (was: Ismael Juma)

 Update to Gradle 2.4
 

 Key: KAFKA-2185
 URL: https://issues.apache.org/jira/browse/KAFKA-2185
 Project: Kafka
  Issue Type: Improvement
  Components: build
Reporter: Ismael Juma
Priority: Minor
 Attachments: KAFKA-2185.patch


 Gradle 2.4 has been released recently while Kafka is still using Gradle 2.0. 
 There have been a large number of improvements over the various releases 
 (including performance improvements):
 https://gradle.org/docs/2.1/release-notes
 https://gradle.org/docs/2.2/release-notes
 https://gradle.org/docs/2.3/release-notes
 http://gradle.org/docs/current/release-notes



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2185) Update to Gradle 2.4

2015-05-11 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-2185:
---
Attachment: KAFKA-2185.patch

 Update to Gradle 2.4
 

 Key: KAFKA-2185
 URL: https://issues.apache.org/jira/browse/KAFKA-2185
 Project: Kafka
  Issue Type: Improvement
  Components: build
Reporter: Ismael Juma
Assignee: Ismael Juma
Priority: Minor
 Attachments: KAFKA-2185.patch


 Gradle 2.4 has been released recently while Kafka is still using Gradle 2.0. 
 There have been a large number of improvements over the various releases 
 (including performance improvements):
 https://gradle.org/docs/2.1/release-notes
 https://gradle.org/docs/2.2/release-notes
 https://gradle.org/docs/2.3/release-notes
 http://gradle.org/docs/current/release-notes



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2185) Update to Gradle 2.4

2015-05-11 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14538544#comment-14538544
 ] 

Ismael Juma commented on KAFKA-2185:


Updated reviewboard https://reviews.apache.org/r/34056/diff/
 against branch upstream/trunk

 Update to Gradle 2.4
 

 Key: KAFKA-2185
 URL: https://issues.apache.org/jira/browse/KAFKA-2185
 Project: Kafka
  Issue Type: Improvement
  Components: build
Reporter: Ismael Juma
Priority: Minor
 Attachments: KAFKA-2185_2015-05-11_20:55:08.patch


 Gradle 2.4 has been released recently while Kafka is still using Gradle 2.0. 
 There have been a large number of improvements over the various releases 
 (including performance improvements):
 https://gradle.org/docs/2.1/release-notes
 https://gradle.org/docs/2.2/release-notes
 https://gradle.org/docs/2.3/release-notes
 http://gradle.org/docs/current/release-notes



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 34056: Patch for KAFKA-2185

2015-05-11 Thread Ismael Juma

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/34056/
---

(Updated May 11, 2015, 8:02 p.m.)


Review request for kafka.


Bugs: KAFKA-2185
https://issues.apache.org/jira/browse/KAFKA-2185


Repository: kafka


Description
---

Update gradle to 2.4


Diffs
-

  build.gradle fef515b3b2276b1f861e7cc2e33e74c3ce5e405b 

Diff: https://reviews.apache.org/r/34056/diff/


Testing (updated)
---

Rebuilt the gradle wrapper via `gradle` and then ran various build commands 
like:

- ./gradlew releaseTarGz
- ./gradlew jarAll
- ./gradlew test
- ./gradlew -PscalaVersion=2.11.6 test


Thanks,

Ismael Juma



Re: [DISCUSS] KIP-21 Configuration Management

2015-05-11 Thread Gwen Shapira
What Todd said :)

(I think my ops background is showing...)

On Mon, May 11, 2015 at 10:17 PM, Todd Palino tpal...@gmail.com wrote:

 I understand your point here, Jay, but I disagree that we can't have two
 configuration systems. We have two different types of configuration
 information. We have configuration that relates to the service itself (the
 Kafka broker), and we have configuration that relates to the content within
  the service (topics). I would put the client configuration (quotas) with the
  second part, as it is dynamic information. I just don't see a good
 argument for effectively degrading the configuration for the service
 because of trying to keep it paired with the configuration of dynamic
 resources.

 -Todd

 On Mon, May 11, 2015 at 11:33 AM, Jay Kreps jay.kr...@gmail.com wrote:

  I totally agree that ZK is not in-and-of-itself a configuration
 management
  solution and it would be better if we could just keep all our config in
  files. Anyone who has followed the various config discussions over the
 past
  few years of discussion knows I'm the biggest proponent of immutable
  file-driven config.
 
  The analogy to normal unix services isn't actually quite right though.
  The problem Kafka has is that a number of the configurable entities it
  manages are added dynamically--topics, clients, consumer groups, etc.
 What
   this actually resembles is not a unix service like HTTPD but a database,
  and databases typically do manage config dynamically for exactly the same
  reason.
 
   The last few emails are arguing that files > ZK as a config solution. I
   agree with this, but that isn't really the question, right? The reality is
  that we need to be able to configure dynamically created entities and we
  won't get a satisfactory solution to that using files (e.g. rsync is not
 an
  acceptable topic creation mechanism). What we are discussing is having a
  single config mechanism or multiple. If we have multiple you need to
 solve
  the whole config lifecycle problem for both--management, audit, rollback,
  etc.
 
  Gwen, you were saying we couldn't get rid of the configuration file, not
  sure if I understand. Is that because we need to give the URL for ZK?
  Wouldn't the same argument work to say that we can't use configuration
  files because we have to specify the file path? I think we can just give
  the server the same --zookeeper argument we use everywhere else, right?
 
  -Jay
 
  On Sun, May 10, 2015 at 11:28 AM, Todd Palino tpal...@gmail.com wrote:
 
   I've been watching this discussion for a while, and I have to jump in
 and
   side with Gwen here. I see no benefit to putting the configs into
  Zookeeper
   entirely, and a lot of downside. The two biggest problems I have with
  this
   are:
  
   1) Configuration management. OK, so you can write glue for Chef to put
   configs into Zookeeper. You also need to write glue for Puppet. And
   Cfengine. And everything else out there. Files are an industry standard
   practice, they're how just about everyone handles it, and there's
 reasons
    for that, not just "it's the way it's always been done."
  
   2) Auditing. Configuration files can easily be managed in a source
   repository system which tracks what changes were made and who made
 them.
  It
   also easily allows for rolling back to a previous version. Zookeeper
 does
   not.
  
   I see absolutely nothing wrong with putting the quota (client) configs
  and
   the topic config overrides in Zookeeper, and keeping everything else
   exactly where it is, in the configuration file. To handle
 configurations
   for the broker that can be changed at runtime without a restart, you
 can
   use the industry standard practice of catching SIGHUP and rereading the
   configuration file at that point.
  
   -Todd
  
  
   On Sun, May 10, 2015 at 4:00 AM, Gwen Shapira gshap...@cloudera.com
   wrote:
  
I am still not clear about the benefits of managing configuration in
ZooKeeper vs. keeping the local file and adding a refresh mechanism
(signal, protocol, zookeeper, or other).
   
Benefits of staying with configuration file:
1. In line with pretty much any Linux service that exists, so admins
   have a
lot of related experience.
2. Much smaller change to our code-base, so easier to patch, review
 and
test. Lower risk overall.
   
Can you walk me over the benefits of using Zookeeper? Especially
 since
  it
looks like we can't get rid of the file entirely?
   
Gwen
   
On Thu, May 7, 2015 at 3:33 AM, Jun Rao j...@confluent.io wrote:
   
 One of the Chef users confirmed that Chef integration could still
  work
   if
 all configs are moved to ZK. My rough understanding of how Chef
 works
   is
 that a user first registers a service host with a Chef server.
 After
that,
 a Chef client will be run on the service host. The user can then
 push
 config changes intended for a service/host to the Chef server. The
   

Re: [DISCUSS] KIP-21 Configuration Management

2015-05-11 Thread Gwen Shapira
Hi Jay,

I don't say we can't get rid of configuration file, I believe we can - it
is just a lot of work and not a good idea IMO.

I think the analogy to normal unix services stands. MySQL and Postgres
use configuration files.

I think there are two topics here:
1. Configuring dynamically created entities - topics, clients, etc. Topic
config is managed in ZK now, right? And we can do the same for clients, I
guess. Is this what we are discussing here?
2. Dynamic configuration of the broker itself - I think it makes more sense
to add a refresh from file mechanism and use puppet to manage broker
configuration (like normal services). I don't think we have any example of
that kind of configuration yet, right?

Gwen


On Mon, May 11, 2015 at 9:33 PM, Jay Kreps jay.kr...@gmail.com wrote:

 I totally agree that ZK is not in-and-of-itself a configuration management
 solution and it would be better if we could just keep all our config in
 files. Anyone who has followed the various config discussions over the past
 few years of discussion knows I'm the biggest proponent of immutable
 file-driven config.

 The analogy to normal unix services isn't actually quite right though.
 The problem Kafka has is that a number of the configurable entities it
 manages are added dynamically--topics, clients, consumer groups, etc. What
  this actually resembles is not a unix service like HTTPD but a database,
 and databases typically do manage config dynamically for exactly the same
 reason.

  The last few emails are arguing that files > ZK as a config solution. I
  agree with this, but that isn't really the question, right? The reality is
 that we need to be able to configure dynamically created entities and we
 won't get a satisfactory solution to that using files (e.g. rsync is not an
 acceptable topic creation mechanism). What we are discussing is having a
 single config mechanism or multiple. If we have multiple you need to solve
 the whole config lifecycle problem for both--management, audit, rollback,
 etc.

 Gwen, you were saying we couldn't get rid of the configuration file, not
 sure if I understand. Is that because we need to give the URL for ZK?
 Wouldn't the same argument work to say that we can't use configuration
 files because we have to specify the file path? I think we can just give
 the server the same --zookeeper argument we use everywhere else, right?

 -Jay

 On Sun, May 10, 2015 at 11:28 AM, Todd Palino tpal...@gmail.com wrote:

  I've been watching this discussion for a while, and I have to jump in and
  side with Gwen here. I see no benefit to putting the configs into
 Zookeeper
  entirely, and a lot of downside. The two biggest problems I have with
 this
  are:
 
  1) Configuration management. OK, so you can write glue for Chef to put
  configs into Zookeeper. You also need to write glue for Puppet. And
  Cfengine. And everything else out there. Files are an industry standard
  practice, they're how just about everyone handles it, and there's reasons
   for that, not just "it's the way it's always been done."
 
  2) Auditing. Configuration files can easily be managed in a source
  repository system which tracks what changes were made and who made them.
 It
  also easily allows for rolling back to a previous version. Zookeeper does
  not.
 
  I see absolutely nothing wrong with putting the quota (client) configs
 and
  the topic config overrides in Zookeeper, and keeping everything else
  exactly where it is, in the configuration file. To handle configurations
  for the broker that can be changed at runtime without a restart, you can
  use the industry standard practice of catching SIGHUP and rereading the
  configuration file at that point.
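A minimal sketch of the SIGHUP approach described above, in plain Python with a placeholder key=value file format (this is an illustration of the general unix pattern, not Kafka's actual config handling):

```python
import signal
import threading

class ReloadableConfig:
    """Reread a key=value config file whenever the process receives SIGHUP."""

    def __init__(self, path):
        self.path = path
        self._lock = threading.Lock()
        self.values = {}
        self.reload()
        # POSIX only: re-parse the file on SIGHUP instead of restarting.
        signal.signal(signal.SIGHUP, lambda signum, frame: self.reload())

    def reload(self):
        parsed = {}
        with open(self.path) as f:
            for line in f:
                line = line.strip()
                if line and not line.startswith("#"):
                    key, _, value = line.partition("=")
                    parsed[key.strip()] = value.strip()
        with self._lock:
            self.values = parsed
```

With this in place, `kill -HUP <pid>` picks up edits to the file without a restart.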
 
  -Todd
 
 
  On Sun, May 10, 2015 at 4:00 AM, Gwen Shapira gshap...@cloudera.com
  wrote:
 
   I am still not clear about the benefits of managing configuration in
   ZooKeeper vs. keeping the local file and adding a refresh mechanism
   (signal, protocol, zookeeper, or other).
  
   Benefits of staying with configuration file:
   1. In line with pretty much any Linux service that exists, so admins
  have a
   lot of related experience.
   2. Much smaller change to our code-base, so easier to patch, review and
   test. Lower risk overall.
  
   Can you walk me over the benefits of using Zookeeper? Especially since
 it
   looks like we can't get rid of the file entirely?
  
   Gwen
  
   On Thu, May 7, 2015 at 3:33 AM, Jun Rao j...@confluent.io wrote:
  
One of the Chef users confirmed that Chef integration could still
 work
  if
all configs are moved to ZK. My rough understanding of how Chef works
  is
that a user first registers a service host with a Chef server. After
   that,
a Chef client will be run on the service host. The user can then push
config changes intended for a service/host to the Chef server. The
  server
is then responsible for pushing the changes to Chef clients. Chef
  clients
support pluggable logic. For example, it can 

Re: Review Request 34050: Patch for KAFKA-2169

2015-05-11 Thread Jun Rao

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/34050/#review83278
---


Thanks for the patch. A couple of comments below.


core/src/main/scala/kafka/consumer/ZookeeperTopicEventWatcher.scala
https://reviews.apache.org/r/34050/#comment134178

This method throws exception instead of RuntimeException.



core/src/main/scala/kafka/controller/KafkaController.scala
https://reviews.apache.org/r/34050/#comment134183

We register two StateListeners on the same zkclient instance in the broker. 
If we can't establish a new ZK session, both listeners will be called. However, 
we only need to exit in one of the listeners. So, we can just do the logging and 
exit in handleSessionEstablishmentError() in KafkaHealthcheck and add a 
comment in the listener in KafkaController that the actual logic is done in the 
other listener.

Ditto to the two listeners in the consumer.
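A toy Python model of the pattern being suggested: both listeners fire when the session cannot be re-established, but only the designated health-check listener performs the fatal exit. Names are illustrative, not Kafka's actual classes, and the exit is injected so the sketch need not call sys.exit:

```python
class FakeZkClient:
    """Stand-in for a ZK client that fans state changes out to listeners."""

    def __init__(self):
        self.listeners = []

    def subscribe_state_changes(self, listener):
        self.listeners.append(listener)

    def fire_session_establishment_error(self, error):
        for listener in self.listeners:
            listener.handle_session_establishment_error(error)

class ControllerListener:
    def handle_session_establishment_error(self, error):
        # Log only; the actual exit is done in the health-check listener.
        print("controller: ZK session could not be re-established: %s" % error)

class HealthCheckListener:
    def __init__(self, exit_fn):
        self.exit_fn = exit_fn  # in real code this would be System.exit

    def handle_session_establishment_error(self, error):
        print("healthcheck: fatal, shutting down: %s" % error)
        self.exit_fn(1)

exit_codes = []
client = FakeZkClient()
client.subscribe_state_changes(ControllerListener())
client.subscribe_state_changes(HealthCheckListener(exit_codes.append))
client.fire_session_establishment_error("session expired")
```

Only one exit is triggered regardless of listener registration order, which is the point of concentrating the exit logic in a single listener.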


- Jun Rao


On May 11, 2015, 6:34 p.m., Parth Brahmbhatt wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/34050/
 ---
 
 (Updated May 11, 2015, 6:34 p.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-2169
 https://issues.apache.org/jira/browse/KAFKA-2169
 
 
 Repository: kafka
 
 
 Description
 ---
 
 System.exit instead of throwing RuntimeException when ZooKeeper session 
 establishment fails.
 
 
 Diffs
 -
 
   build.gradle fef515b3b2276b1f861e7cc2e33e74c3ce5e405b 
   core/src/main/scala/kafka/consumer/ZookeeperConsumerConnector.scala 
 aa8d9404a3e78a365df06404b79d0d8f694b4bd6 
   core/src/main/scala/kafka/consumer/ZookeeperTopicEventWatcher.scala 
 38f4ec0bd1b388cc8fc04b38bbb2e7aaa1c3f43b 
   core/src/main/scala/kafka/controller/KafkaController.scala 
 a6351163f5b6f080d6fa50bcc3533d445fcbc067 
   core/src/main/scala/kafka/server/KafkaHealthcheck.scala 
 861b7f644941f88ce04a4e95f6b28d18bf1db16d 
 
 Diff: https://reviews.apache.org/r/34050/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Parth Brahmbhatt
 




Re: Review Request 34050: Patch for KAFKA-2169

2015-05-11 Thread Parth Brahmbhatt

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/34050/
---

(Updated May 11, 2015, 8:53 p.m.)


Review request for kafka.


Bugs: KAFKA-2169
https://issues.apache.org/jira/browse/KAFKA-2169


Repository: kafka


Description (updated)
---

System.exit instead of throwing RuntimeException when ZooKeeper session 
establishment fails.


Removing the unnecessary @throws.


Diffs (updated)
-

  build.gradle fef515b3b2276b1f861e7cc2e33e74c3ce5e405b 
  core/src/main/scala/kafka/consumer/ZookeeperConsumerConnector.scala 
aa8d9404a3e78a365df06404b79d0d8f694b4bd6 
  core/src/main/scala/kafka/consumer/ZookeeperTopicEventWatcher.scala 
38f4ec0bd1b388cc8fc04b38bbb2e7aaa1c3f43b 
  core/src/main/scala/kafka/controller/KafkaController.scala 
a6351163f5b6f080d6fa50bcc3533d445fcbc067 
  core/src/main/scala/kafka/server/KafkaHealthcheck.scala 
861b7f644941f88ce04a4e95f6b28d18bf1db16d 

Diff: https://reviews.apache.org/r/34050/diff/


Testing
---


Thanks,

Parth Brahmbhatt



[jira] [Updated] (KAFKA-2169) Upgrade to zkclient-0.5

2015-05-11 Thread Parth Brahmbhatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Parth Brahmbhatt updated KAFKA-2169:

Attachment: KAFKA-2169_2015-05-11_13:52:57.patch

 Upgrade to zkclient-0.5
 ---

 Key: KAFKA-2169
 URL: https://issues.apache.org/jira/browse/KAFKA-2169
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.2.0
Reporter: Neha Narkhede
Assignee: Parth Brahmbhatt
Priority: Critical
 Attachments: KAFKA-2169.patch, KAFKA-2169.patch, 
 KAFKA-2169_2015-05-11_13:52:57.patch


 zkclient-0.5 is released 
 http://mvnrepository.com/artifact/com.101tec/zkclient/0.5 and has the fix for 
 KAFKA-824



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2185) Update to Gradle 2.4

2015-05-11 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-2185:
---
Attachment: KAFKA-2185_2015-05-11_20:55:08.patch

 Update to Gradle 2.4
 

 Key: KAFKA-2185
 URL: https://issues.apache.org/jira/browse/KAFKA-2185
 Project: Kafka
  Issue Type: Improvement
  Components: build
Reporter: Ismael Juma
Priority: Minor
 Attachments: KAFKA-2185_2015-05-11_20:55:08.patch


 Gradle 2.4 has been released recently while Kafka is still using Gradle 2.0. 
 There have been a large number of improvements over the various releases 
 (including performance improvements):
 https://gradle.org/docs/2.1/release-notes
 https://gradle.org/docs/2.2/release-notes
 https://gradle.org/docs/2.3/release-notes
 http://gradle.org/docs/current/release-notes



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 34056: Patch for KAFKA-2185

2015-05-11 Thread Ismael Juma

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/34056/
---

(Updated May 11, 2015, 7:55 p.m.)


Review request for kafka.


Bugs: KAFKA-2185
https://issues.apache.org/jira/browse/KAFKA-2185


Repository: kafka


Description
---

Update gradle to 2.4


Diffs (updated)
-

  build.gradle fef515b3b2276b1f861e7cc2e33e74c3ce5e405b 

Diff: https://reviews.apache.org/r/34056/diff/


Testing
---


Thanks,

Ismael Juma



[jira] [Updated] (KAFKA-2185) Update to Gradle 2.4

2015-05-11 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-2185:
---
Attachment: (was: KAFKA-2185.patch)

 Update to Gradle 2.4
 

 Key: KAFKA-2185
 URL: https://issues.apache.org/jira/browse/KAFKA-2185
 Project: Kafka
  Issue Type: Improvement
  Components: build
Reporter: Ismael Juma
Priority: Minor
 Attachments: KAFKA-2185_2015-05-11_20:55:08.patch


 Gradle 2.4 has been released recently while Kafka is still using Gradle 2.0. 
 There have been a large number of improvements over the various releases 
 (including performance improvements):
 https://gradle.org/docs/2.1/release-notes
 https://gradle.org/docs/2.2/release-notes
 https://gradle.org/docs/2.3/release-notes
 http://gradle.org/docs/current/release-notes



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2169) Upgrade to zkclient-0.5

2015-05-11 Thread Parth Brahmbhatt (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14538629#comment-14538629
 ] 

Parth Brahmbhatt commented on KAFKA-2169:
-

Updated reviewboard https://reviews.apache.org/r/34050/diff/
 against branch origin/trunk

 Upgrade to zkclient-0.5
 ---

 Key: KAFKA-2169
 URL: https://issues.apache.org/jira/browse/KAFKA-2169
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.2.0
Reporter: Neha Narkhede
Assignee: Parth Brahmbhatt
Priority: Critical
 Attachments: KAFKA-2169.patch, KAFKA-2169.patch, 
 KAFKA-2169_2015-05-11_13:52:57.patch


 zkclient-0.5 is released 
 http://mvnrepository.com/artifact/com.101tec/zkclient/0.5 and has the fix for 
 KAFKA-824



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-2185) Update to Gradle 2.4

2015-05-11 Thread Ismael Juma (JIRA)
Ismael Juma created KAFKA-2185:
--

 Summary: Update to Gradle 2.4
 Key: KAFKA-2185
 URL: https://issues.apache.org/jira/browse/KAFKA-2185
 Project: Kafka
  Issue Type: Improvement
  Components: build
Reporter: Ismael Juma
Assignee: Ismael Juma
Priority: Minor


Gradle 2.4 has been released recently while Kafka is still using Gradle 2.0. 
There have been a large number of improvements over the various releases 
(including performance improvements):

https://gradle.org/docs/2.1/release-notes
https://gradle.org/docs/2.2/release-notes
https://gradle.org/docs/2.3/release-notes
http://gradle.org/docs/current/release-notes



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2185) Update to Gradle 2.4

2015-05-11 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14538537#comment-14538537
 ] 

Ismael Juma commented on KAFKA-2185:


Created reviewboard https://reviews.apache.org/r/34056/diff/
 against branch upstream/trunk

 Update to Gradle 2.4
 

 Key: KAFKA-2185
 URL: https://issues.apache.org/jira/browse/KAFKA-2185
 Project: Kafka
  Issue Type: Improvement
  Components: build
Reporter: Ismael Juma
Assignee: Ismael Juma
Priority: Minor
 Attachments: KAFKA-2185.patch


 Gradle 2.4 has been released recently while Kafka is still using Gradle 2.0. 
 There have been a large number of improvements over the various releases 
 (including performance improvements):
 https://gradle.org/docs/2.1/release-notes
 https://gradle.org/docs/2.2/release-notes
 https://gradle.org/docs/2.3/release-notes
 http://gradle.org/docs/current/release-notes



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2185) Update to Gradle 2.4

2015-05-11 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-2185:
---
Status: Patch Available  (was: Open)

 Update to Gradle 2.4
 

 Key: KAFKA-2185
 URL: https://issues.apache.org/jira/browse/KAFKA-2185
 Project: Kafka
  Issue Type: Improvement
  Components: build
Reporter: Ismael Juma
Assignee: Ismael Juma
Priority: Minor
 Attachments: KAFKA-2185.patch


 Gradle 2.4 has been released recently while Kafka is still using Gradle 2.0. 
 There have been a large number of improvements over the various releases 
 (including performance improvements):
 https://gradle.org/docs/2.1/release-notes
 https://gradle.org/docs/2.2/release-notes
 https://gradle.org/docs/2.3/release-notes
 http://gradle.org/docs/current/release-notes



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Review Request 34056: Patch for KAFKA-2185

2015-05-11 Thread Ismael Juma

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/34056/
---

Review request for kafka.


Bugs: KAFKA-2185
https://issues.apache.org/jira/browse/KAFKA-2185


Repository: kafka


Description
---

Update gradle to 2.4


Diffs
-

  build.gradle fef515b3b2276b1f861e7cc2e33e74c3ce5e405b 

Diff: https://reviews.apache.org/r/34056/diff/


Testing
---


Thanks,

Ismael Juma



[jira] [Commented] (KAFKA-2169) Upgrade to zkclient-0.5

2015-05-11 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14538592#comment-14538592
 ] 

Jun Rao commented on KAFKA-2169:


1) By api compatibility, I meant the following. Let's say an application uses a 
third party library that includes a Kafka consumer. Let's say that the third 
party library is built with Kafka 0.8.2 jars. If the api is compatible, the 
application can upgrade to Kafka 0.8.3 with the same third party library w/o 
forcing it to recompile. To test this out, you can get a Kafka 0.8.2 binary 
release, replace everything in libs with the jars in a Kafka 0.8.3 binary 
release (in particular, the new zkclient jar) and see if console consumer in 
Kafka 0.8.2 still works.
3) Commented on the RB. 
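The jar-swap check described in 1) could be scripted roughly as follows (Python sketch; the release directory names are placeholders, and the final smoke test with the 0.8.2 console consumer still has to be run by hand):

```python
import pathlib
import shutil

def swap_libs(old_release: pathlib.Path, new_release: pathlib.Path) -> list:
    """Replace every jar under old_release/libs with the jars from
    new_release/libs, mimicking the manual compatibility check above.
    The libs/ layout is assumed from the Kafka binary release structure."""
    old_libs = old_release / "libs"
    # Drop the old jars (including the old zkclient jar).
    for jar in old_libs.glob("*.jar"):
        jar.unlink()
    # Copy in the new release's jars.
    copied = []
    for jar in sorted((new_release / "libs").glob("*.jar")):
        shutil.copy(jar, old_libs / jar.name)
        copied.append(jar.name)
    return copied
```

After swapping, run the old release's console consumer against a test topic; if it still works with the new jars (in particular the new zkclient jar), the api is compatible in the sense described above.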

 Upgrade to zkclient-0.5
 ---

 Key: KAFKA-2169
 URL: https://issues.apache.org/jira/browse/KAFKA-2169
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.2.0
Reporter: Neha Narkhede
Assignee: Parth Brahmbhatt
Priority: Critical
 Attachments: KAFKA-2169.patch, KAFKA-2169.patch


 zkclient-0.5 is released 
 http://mvnrepository.com/artifact/com.101tec/zkclient/0.5 and has the fix for 
 KAFKA-824



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-2186) Follow-up patch of KAFKA-1650

2015-05-11 Thread Jiangjie Qin (JIRA)
Jiangjie Qin created KAFKA-2186:
---

 Summary: Follow-up patch of KAFKA-1650
 Key: KAFKA-2186
 URL: https://issues.apache.org/jira/browse/KAFKA-2186
 Project: Kafka
  Issue Type: Bug
Reporter: Jiangjie Qin
Assignee: Jiangjie Qin


Offsets commit with a map was added in KAFKA-1650. It should be added to the 
consumer connector Java API as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


RE: [DISCUSS] KIP-21 Configuration Management

2015-05-11 Thread Aditya Auradkar
I did initially think having everything in ZK was better than having the 
dichotomy Joel referred to, primarily because all Kafka configs can be managed 
consistently.

I guess the biggest disadvantage of driving broker config primarily from ZK is 
that it requires everyone to manage Kafka configuration separately from other 
services. Several people have separately mentioned integration issues with 
systems like Puppet and Chef. While they may support pluggable logic, it does 
require everyone to write that additional piece of logic specific to Kafka. We 
will have to implement group, fabric, tag hierarchy (as Ashish mentioned), 
auditing and ACL management. While this potential consistency is nice, perhaps 
the tradeoff isn't worth it given that the resulting system isn't much superior 
to pushing out new config files and is also quite disruptive. Since this 
impacts operations teams the most, I also think their input is probably the 
most valuable and should perhaps drive the outcome.

I also think it is fine to treat topic and client configuration separately 
because they are more like metadata than actual service configuration. 

Aditya

Re: Review Request 33378: Patch for KAFKA-2136

2015-05-11 Thread Aditya Auradkar

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/33378/
---

(Updated May 11, 2015, 9:51 p.m.)


Review request for kafka, Joel Koshy and Jun Rao.


Bugs: KAFKA-2136
https://issues.apache.org/jira/browse/KAFKA-2136


Repository: kafka


Description (updated)
---

Fixing bug


Diffs (updated)
-

  
clients/src/main/java/org/apache/kafka/clients/consumer/internals/Fetcher.java 
ef9dd5238fbc771496029866ece1d85db6d7b7a5 
  clients/src/main/java/org/apache/kafka/clients/producer/internals/Sender.java 
b2db91ca14bbd17fef5ce85839679144fff3f689 
  clients/src/main/java/org/apache/kafka/common/protocol/Protocol.java 
3dc8b015afd2347a41c9a9dbc02b8e367da5f75f 
  clients/src/main/java/org/apache/kafka/common/requests/FetchRequest.java 
8686d83aa52e435c6adafbe9ff4bd1602281072a 
  clients/src/main/java/org/apache/kafka/common/requests/FetchResponse.java 
eb8951fba48c335095cc43fc3672de1c733e07ff 
  clients/src/main/java/org/apache/kafka/common/requests/ProduceRequest.java 
fabeae3083a8ea55cdacbb9568f3847ccd85bab4 
  clients/src/main/java/org/apache/kafka/common/requests/ProduceResponse.java 
37ec0b79beafcf5735c386b066eb319fb697eff5 
  
clients/src/test/java/org/apache/kafka/clients/consumer/internals/FetcherTest.java
 419541011d652becf0cda7a5e62ce813cddb1732 
  
clients/src/test/java/org/apache/kafka/clients/producer/internals/SenderTest.java
 8b1805d3d2bcb9fe2bacb37d870c3236aa9532c4 
  
clients/src/test/java/org/apache/kafka/common/requests/RequestResponseTest.java 
e3cc1967e407b64cc734548c19e30de700b64ba8 
  core/src/main/scala/kafka/api/FetchRequest.scala 
b038c15186c0cbcc65b59479324052498361b717 
  core/src/main/scala/kafka/api/FetchResponse.scala 
75aaf57fb76ec01660d93701a57ae953d877d81c 
  core/src/main/scala/kafka/api/ProducerRequest.scala 
570b2da1d865086f9830aa919a49063abbbe574d 
  core/src/main/scala/kafka/api/ProducerResponse.scala 
5d1fac4cb8943f5bfaa487f8e9d9d2856efbd330 
  core/src/main/scala/kafka/consumer/SimpleConsumer.scala 
31a2639477bf66f9a05d2b9b07794572d7ec393b 
  core/src/main/scala/kafka/server/AbstractFetcherThread.scala 
a439046e118b6efcc3a5a9d9e8acb79f85e40398 
  core/src/main/scala/kafka/server/DelayedFetch.scala 
de6cf5bdaa0e70394162febc63b50b55ca0a92db 
  core/src/main/scala/kafka/server/DelayedProduce.scala 
05078b24ef28f2f4e099afa943e43f1d00359fda 
  core/src/main/scala/kafka/server/KafkaApis.scala 
417960dd1ab407ebebad8fdb0e97415db3e91a2f 
  core/src/main/scala/kafka/server/OffsetManager.scala 
18680ce100f10035175cc0263ba7787ab0f6a17a 
  core/src/main/scala/kafka/server/ReplicaFetcherThread.scala 
b31b432a226ba79546dd22ef1d2acbb439c2e9a3 
  core/src/main/scala/kafka/server/ReplicaManager.scala 
59c9bc3ac3a8afc07a6f8c88c5871304db588d17 
  core/src/test/scala/unit/kafka/api/RequestResponseSerializationTest.scala 
5717165f2344823fabe8f7cfafae4bb8af2d949a 
  core/src/test/scala/unit/kafka/server/DelayedOperationTest.scala 
f3ab3f4ff8eb1aa6b2ab87ba75f72eceb6649620 
  core/src/test/scala/unit/kafka/server/ReplicaManagerTest.scala 
00d59337a99ac135e8689bd1ecd928f7b1423d79 

Diff: https://reviews.apache.org/r/33378/diff/


Testing
---

New tests added


Thanks,

Aditya Auradkar



[jira] [Commented] (KAFKA-2136) Client side protocol changes to return quota delays

2015-05-11 Thread Aditya A Auradkar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14538703#comment-14538703
 ] 

Aditya A Auradkar commented on KAFKA-2136:
--

Updated reviewboard https://reviews.apache.org/r/33378/diff/
 against branch origin/trunk

 Client side protocol changes to return quota delays
 ---

 Key: KAFKA-2136
 URL: https://issues.apache.org/jira/browse/KAFKA-2136
 Project: Kafka
  Issue Type: Sub-task
Reporter: Aditya Auradkar
Assignee: Aditya Auradkar
 Attachments: KAFKA-2136.patch, KAFKA-2136_2015-05-06_18:32:48.patch, 
 KAFKA-2136_2015-05-06_18:35:54.patch, KAFKA-2136_2015-05-11_14:50:56.patch


 As described in KIP-13, evolve the protocol to return a throttle_time_ms in 
 the Fetch and the ProduceResponse objects. Add client side metrics on the new 
 producer and consumer to expose the delay time.
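As a rough sketch of the client-side bookkeeping this describes (the class and field names below are illustrative stand-ins, not Kafka's actual protocol or metrics classes):

```java
// Hypothetical response holder: the broker-reported throttle time rides
// back on each fetch/produce response (KIP-13's throttle_time_ms field).
class ThrottledResponse {
    final int throttleTimeMs;
    ThrottledResponse(int throttleTimeMs) { this.throttleTimeMs = throttleTimeMs; }
}

// Minimal max/avg sensor, standing in for the client metrics registry.
class ThrottleTimeSensor {
    private long totalMs = 0;
    private int count = 0;
    private int maxSeenMs = 0;

    void record(ThrottledResponse r) {
        totalMs += r.throttleTimeMs;
        count++;
        maxSeenMs = Math.max(maxSeenMs, r.throttleTimeMs);
    }

    double avg() { return count == 0 ? 0.0 : (double) totalMs / count; }
    int max()    { return maxSeenMs; }
}

public class ThrottleMetricsSketch {
    public static void main(String[] args) {
        ThrottleTimeSensor sensor = new ThrottleTimeSensor();
        sensor.record(new ThrottledResponse(0));
        sensor.record(new ThrottledResponse(30));
        sensor.record(new ThrottledResponse(10));
        System.out.println("max=" + sensor.max() + " avg=" + sensor.avg());
    }
}
```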



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2186) Follow-up patch of KAFKA-1650

2015-05-11 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2186:
---
Reviewer: Joel Koshy

 Follow-up patch of KAFKA-1650
 -

 Key: KAFKA-2186
 URL: https://issues.apache.org/jira/browse/KAFKA-2186
 Project: Kafka
  Issue Type: Bug
Reporter: Jiangjie Qin
Assignee: Jiangjie Qin
 Attachments: KAFKA-2186.patch


 Offsets commit with a map was added in KAFKA-1650. It should be added to 
 consumer connector java API also.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2169) Upgrade to zkclient-0.5

2015-05-11 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2169:
---
Reviewer: Jun Rao

 Upgrade to zkclient-0.5
 ---

 Key: KAFKA-2169
 URL: https://issues.apache.org/jira/browse/KAFKA-2169
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.2.0
Reporter: Neha Narkhede
Assignee: Parth Brahmbhatt
Priority: Critical
 Attachments: KAFKA-2169.patch, KAFKA-2169.patch, 
 KAFKA-2169_2015-05-11_13:52:57.patch


 zkclient-0.5 is released 
 http://mvnrepository.com/artifact/com.101tec/zkclient/0.5 and has the fix for 
 KAFKA-824



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2185) Update to Gradle 2.4

2015-05-11 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2185:
---
Reviewer: Jun Rao
Assignee: Ismael Juma

 Update to Gradle 2.4
 

 Key: KAFKA-2185
 URL: https://issues.apache.org/jira/browse/KAFKA-2185
 Project: Kafka
  Issue Type: Improvement
  Components: build
Reporter: Ismael Juma
Assignee: Ismael Juma
Priority: Minor
 Attachments: KAFKA-2185_2015-05-11_20:55:08.patch


 Gradle 2.4 has been released recently while Kafka is still using Gradle 2.0. 
 There have been a large number of improvements over the various releases 
 (including performance improvements):
 https://gradle.org/docs/2.1/release-notes
 https://gradle.org/docs/2.2/release-notes
 https://gradle.org/docs/2.3/release-notes
 http://gradle.org/docs/current/release-notes



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2132) Move Log4J appender to clients module

2015-05-11 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2132:
---
Reviewer: Jay Kreps  (was: Gwen Shapira)

 Move Log4J appender to clients module
 -

 Key: KAFKA-2132
 URL: https://issues.apache.org/jira/browse/KAFKA-2132
 Project: Kafka
  Issue Type: Improvement
Reporter: Gwen Shapira
Assignee: Ashish K Singh
 Attachments: KAFKA-2132.patch, KAFKA-2132_2015-04-27_19:59:46.patch, 
 KAFKA-2132_2015-04-30_12:22:02.patch, KAFKA-2132_2015-04-30_15:53:17.patch


 Log4j appender is just a producer.
 Since we have a new producer in the clients module, there is no need to keep 
 the Log4j appender in core and force people to package all of Kafka with their apps.
 Let's move the Log4jAppender to the clients module.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2163) Offsets manager cache should prevent stale-offset-cleanup while an offset load is in progress; otherwise we can lose consumer offsets

2015-05-11 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2163:
---
Reviewer: Jun Rao

 Offsets manager cache should prevent stale-offset-cleanup while an offset 
 load is in progress; otherwise we can lose consumer offsets
 -

 Key: KAFKA-2163
 URL: https://issues.apache.org/jira/browse/KAFKA-2163
 Project: Kafka
  Issue Type: Bug
Reporter: Joel Koshy
Assignee: Joel Koshy
 Fix For: 0.8.3

 Attachments: KAFKA-2163.patch


 When leadership of an offsets partition moves, the new leader loads offsets 
 from that partition into the offset manager cache.
 Independently, the offset manager has a periodic cleanup task for stale 
 offsets that removes old offsets from the cache and appends tombstones for 
 those. If the partition happens to contain much older offsets (earlier in the 
 log) and the load inserts those into the cache, the cleanup task may run, see 
 those offsets (which it deems to be stale), and proceed to remove them from 
 the cache and append a tombstone to the end of the log. The tombstone will override the 
 true latest offset and a subsequent offset fetch request will return no 
 offset.
 We just need to prevent the cleanup task from running during an offset load.
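A minimal sketch of the guard described above — the names (`OffsetCleanupGuard`, `maybeCleanup`) are illustrative, not the actual OffsetManager API:

```java
import java.util.concurrent.atomic.AtomicInteger;

// The periodic stale-offset cleanup skips its cycle whenever an
// offsets-partition load is in flight, so it can never tombstone an
// offset that the in-progress load would have superseded.
class OffsetCleanupGuard {
    private final AtomicInteger loadsInProgress = new AtomicInteger(0);

    void beginLoad() { loadsInProgress.incrementAndGet(); }
    void endLoad()   { loadsInProgress.decrementAndGet(); }

    // Returns true if cleanup ran; false if it was skipped because a load
    // was in progress. (A real implementation would also need the check
    // and the cleanup to happen under a common lock to close the race
    // between them; this only illustrates the skip-while-loading rule.)
    boolean maybeCleanup(Runnable cleanup) {
        if (loadsInProgress.get() > 0)
            return false;
        cleanup.run();
        return true;
    }
}
```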



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2176) DefaultPartitioner doesn't perform consistent hashing based on

2015-05-11 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2176:
---
Reviewer: Guozhang Wang

 DefaultPartitioner doesn't perform consistent hashing based on 
 ---

 Key: KAFKA-2176
 URL: https://issues.apache.org/jira/browse/KAFKA-2176
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 0.8.1
Reporter: Igor Maravić
  Labels: easyfix, newbie
 Fix For: 0.8.1

 Attachments: KAFKA-2176.patch


 While deploying MirrorMakers in production, we configured them to use 
 kafka.producer.DefaultPartitioner. Since we had the same number of partitions 
 for the topic in the local and aggregation clusters, we expected that the 
 messages would be partitioned the same way.
 This wasn't the case. Messages were properly partitioned with 
 DefaultPartitioner on our local cluster, since the key was of type String.
 On the MirrorMaker side, the messages were not properly partitioned.
 The problem is that Array[Byte] doesn't override hashCode, since it is a 
 mutable collection.
 The fix is to calculate a deep hash code if the key is of Array type.
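The underlying JVM behavior can be demonstrated directly; `java.util.Arrays.hashCode` gives the content-based ("deep") hash the proposed fix relies on:

```java
import java.util.Arrays;

// Java arrays use the default identity hashCode, so two byte[] keys with
// identical contents almost always hash differently, which makes
// hash-based partitioning unstable across producers and processes.
public class ArrayHashDemo {
    public static void main(String[] args) {
        byte[] a = "some-key".getBytes();
        byte[] b = "some-key".getBytes();

        System.out.println(Arrays.equals(a, b));                      // same contents
        System.out.println(a.hashCode() == b.hashCode());             // identity hash: almost always false
        System.out.println(Arrays.hashCode(a) == Arrays.hashCode(b)); // content hash: always true
    }
}
```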



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2178) Loss of highwatermarks on incorrect cluster shutdown/restart

2015-05-11 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2178:
---
Reviewer: Jun Rao

 Loss of highwatermarks on incorrect cluster shutdown/restart
 

 Key: KAFKA-2178
 URL: https://issues.apache.org/jira/browse/KAFKA-2178
 Project: Kafka
  Issue Type: Improvement
Affects Versions: 0.8.2.1
Reporter: Alexey Ozeritskiy
 Attachments: KAFKA-2178.patch


 ReplicaManager flushes highwatermarks only for partitions that it received 
 from the Controller.
 If the Controller sends an incomplete list of partitions, then ReplicaManager 
 will write an incomplete list of highwatermarks.
 As a result, one can lose a lot of data during an incorrect broker restart.
 We got this situation in real life on our cluster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Kafka KIP hangout May 12

2015-05-11 Thread Jun Rao
Hi, Everyone,

We will have a KIP hangout at 11 PST on May 12. The following is the
agenda. If you want to attend and are not on the invite, please let me know.

Agenda:
KIP-11 (authorization): any remaining issues
KIP-12 (sasl/ssl authentication): status check
KIP-19 (Add a request timeout to NetworkClient)
KIP-21 (configuration management)

Thanks,

Jun


[jira] [Commented] (KAFKA-1997) Refactor Mirror Maker

2015-05-11 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14538900#comment-14538900
 ] 

Jun Rao commented on KAFKA-1997:


The reasoning is that there is no good default value to set. By making these 
required, we are forcing the users to tell us what they want.
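The fail-fast pattern behind this reasoning can be sketched as follows — a config key with no sensible default should blow up at startup rather than silently pick a value. The class and key names here are illustrative, not Kafka's actual ConfigDef API:

```java
import java.util.HashMap;
import java.util.Map;

// "Required, no default" config lookup: missing keys are a startup error.
class RequiredConfig {
    private final Map<String, String> values;
    RequiredConfig(Map<String, String> values) { this.values = values; }

    // Returns the configured value, or fails fast if the user did not
    // supply one, forcing an explicit choice instead of a silent default.
    String require(String key) {
        String v = values.get(key);
        if (v == null)
            throw new IllegalArgumentException("Missing required config: " + key);
        return v;
    }
}
```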

 Refactor Mirror Maker
 -

 Key: KAFKA-1997
 URL: https://issues.apache.org/jira/browse/KAFKA-1997
 Project: Kafka
  Issue Type: Improvement
Reporter: Jiangjie Qin
Assignee: Jiangjie Qin
 Attachments: KAFKA-1997.patch, KAFKA-1997.patch, 
 KAFKA-1997_2015-03-03_16:28:46.patch, KAFKA-1997_2015-03-04_15:07:46.patch, 
 KAFKA-1997_2015-03-04_15:42:45.patch, KAFKA-1997_2015-03-05_20:14:58.patch, 
 KAFKA-1997_2015-03-09_18:55:54.patch, KAFKA-1997_2015-03-10_18:31:34.patch, 
 KAFKA-1997_2015-03-11_15:20:18.patch, KAFKA-1997_2015-03-11_19:10:53.patch, 
 KAFKA-1997_2015-03-13_14:43:34.patch, KAFKA-1997_2015-03-17_13:47:01.patch, 
 KAFKA-1997_2015-03-18_12:47:32.patch


 Refactor mirror maker based on KIP-3



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Review Request 34047: Patch for KAFKA-2169

2015-05-11 Thread Parth Brahmbhatt

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/34047/
---

Review request for kafka.


Bugs: KAFKA-2169
https://issues.apache.org/jira/browse/KAFKA-2169


Repository: kafka


Description
---

KAFKA-2169: Moving to zkClient 0.5 release.


Diffs
-

  build.gradle fef515b3b2276b1f861e7cc2e33e74c3ce5e405b 
  core/src/main/scala/kafka/consumer/ZookeeperConsumerConnector.scala 
aa8d9404a3e78a365df06404b79d0d8f694b4bd6 
  core/src/main/scala/kafka/consumer/ZookeeperTopicEventWatcher.scala 
38f4ec0bd1b388cc8fc04b38bbb2e7aaa1c3f43b 
  core/src/main/scala/kafka/controller/KafkaController.scala 
a6351163f5b6f080d6fa50bcc3533d445fcbc067 
  core/src/main/scala/kafka/server/KafkaHealthcheck.scala 
861b7f644941f88ce04a4e95f6b28d18bf1db16d 

Diff: https://reviews.apache.org/r/34047/diff/


Testing
---


Thanks,

Parth Brahmbhatt



[jira] [Updated] (KAFKA-2169) Upgrade to zkclient-0.5

2015-05-11 Thread Parth Brahmbhatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Parth Brahmbhatt updated KAFKA-2169:

Attachment: KAFKA-2169.patch

 Upgrade to zkclient-0.5
 ---

 Key: KAFKA-2169
 URL: https://issues.apache.org/jira/browse/KAFKA-2169
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.2.0
Reporter: Neha Narkhede
Assignee: Parth Brahmbhatt
Priority: Critical
 Attachments: KAFKA-2169.patch


 zkclient-0.5 is released 
 http://mvnrepository.com/artifact/com.101tec/zkclient/0.5 and has the fix for 
 KAFKA-824



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2169) Upgrade to zkclient-0.5

2015-05-11 Thread Parth Brahmbhatt (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14538313#comment-14538313
 ] 

Parth Brahmbhatt commented on KAFKA-2169:
-

Created reviewboard https://reviews.apache.org/r/34047/diff/
 against branch origin/trunk

 Upgrade to zkclient-0.5
 ---

 Key: KAFKA-2169
 URL: https://issues.apache.org/jira/browse/KAFKA-2169
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.2.0
Reporter: Neha Narkhede
Assignee: Parth Brahmbhatt
Priority: Critical
 Attachments: KAFKA-2169.patch


 zkclient-0.5 is released 
 http://mvnrepository.com/artifact/com.101tec/zkclient/0.5 and has the fix for 
 KAFKA-824



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2169) Upgrade to zkclient-0.5

2015-05-11 Thread Parth Brahmbhatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Parth Brahmbhatt updated KAFKA-2169:

Status: Patch Available  (was: Open)

 Upgrade to zkclient-0.5
 ---

 Key: KAFKA-2169
 URL: https://issues.apache.org/jira/browse/KAFKA-2169
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.2.0
Reporter: Neha Narkhede
Assignee: Parth Brahmbhatt
Priority: Critical
 Attachments: KAFKA-2169.patch


 zkclient-0.5 is released 
 http://mvnrepository.com/artifact/com.101tec/zkclient/0.5 and has the fix for 
 KAFKA-824



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Review Request 34050: Patch for KAFKA-2169

2015-05-11 Thread Parth Brahmbhatt

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/34050/
---

Review request for kafka.


Bugs: KAFKA-2169
https://issues.apache.org/jira/browse/KAFKA-2169


Repository: kafka


Description
---

Call System.exit instead of throwing a RuntimeException when ZooKeeper session 
establishment fails.


Diffs
-

  build.gradle fef515b3b2276b1f861e7cc2e33e74c3ce5e405b 
  core/src/main/scala/kafka/consumer/ZookeeperConsumerConnector.scala 
aa8d9404a3e78a365df06404b79d0d8f694b4bd6 
  core/src/main/scala/kafka/consumer/ZookeeperTopicEventWatcher.scala 
38f4ec0bd1b388cc8fc04b38bbb2e7aaa1c3f43b 
  core/src/main/scala/kafka/controller/KafkaController.scala 
a6351163f5b6f080d6fa50bcc3533d445fcbc067 
  core/src/main/scala/kafka/server/KafkaHealthcheck.scala 
861b7f644941f88ce04a4e95f6b28d18bf1db16d 

Diff: https://reviews.apache.org/r/34050/diff/


Testing
---


Thanks,

Parth Brahmbhatt



[jira] [Commented] (KAFKA-2169) Upgrade to zkclient-0.5

2015-05-11 Thread Parth Brahmbhatt (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14538409#comment-14538409
 ] 

Parth Brahmbhatt commented on KAFKA-2169:
-

Posted a review on review board. https://reviews.apache.org/r/34050/diff/
1) I tried console-producer and console-consumer at trunk with only my changes 
applied, and it works.
2) I do not disagree with the approach; however, that is a change in behavior, 
and I was trying to get the upgrade in, given it's blocking other jiras, without 
having to tie that behavior-change discussion to this jira. I have modified the 
behavior so it will not do System.exit.
3) Not sure what you mean here; we are handling it as part of 
handleSessionEstablishmentError() in all cases. 

 Upgrade to zkclient-0.5
 ---

 Key: KAFKA-2169
 URL: https://issues.apache.org/jira/browse/KAFKA-2169
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.2.0
Reporter: Neha Narkhede
Assignee: Parth Brahmbhatt
Priority: Critical
 Attachments: KAFKA-2169.patch, KAFKA-2169.patch


 zkclient-0.5 is released 
 http://mvnrepository.com/artifact/com.101tec/zkclient/0.5 and has the fix for 
 KAFKA-824



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [DISCUSS] KIP-21 Configuration Management

2015-05-11 Thread Joel Koshy
So the general concern here is the dichotomy of configs (which we
already have - i.e., in the form of broker config files vs topic
configs in zookeeper). We (at LinkedIn) had some discussions on this
last week and had this very question for the operations team whose
opinion is I think to a large degree a touchstone for this decision:
Has the operations team at LinkedIn experienced any pain so far with
managing topic configs in ZooKeeper (while broker configs are
file-based)? It turns out that ops overwhelmingly favors the current
approach. i.e., service configs as file-based configs and client/topic
configs in ZooKeeper is intuitive and works great. This may be
somewhat counter-intuitive to devs, but this is one of those decisions
for which ops input is very critical - because for all practical
purposes, they are the users in this discussion.

If we continue with this dichotomy and need to support dynamic config
for client/topic configs as well as select service configs then there
will need to be dichotomy in the config change mechanism as well.
i.e., client/topic configs will change via (say) a ZooKeeper watch and
the service config will change via a config file re-read (on SIGHUP)
after config changes have been pushed out to local files. Is this a
bad thing? Personally, I don't think it is - i.e. I'm in favor of this
approach. What do others think?

Thanks,

Joel

On Mon, May 11, 2015 at 11:08:44PM +0300, Gwen Shapira wrote:
 What Todd said :)
 
 (I think my ops background is showing...)
 
 On Mon, May 11, 2015 at 10:17 PM, Todd Palino tpal...@gmail.com wrote:
 
  I understand your point here, Jay, but I disagree that we can't have two
  configuration systems. We have two different types of configuration
  information. We have configuration that relates to the service itself (the
  Kafka broker), and we have configuration that relates to the content within
  the service (topics). I would put the client configuration (quotas) with
  the second part, as it is dynamic information. I just don't see a good
  argument for effectively degrading the configuration for the service
  because of trying to keep it paired with the configuration of dynamic
  resources.
 
  -Todd
 
  On Mon, May 11, 2015 at 11:33 AM, Jay Kreps jay.kr...@gmail.com wrote:
 
   I totally agree that ZK is not in-and-of-itself a configuration
  management
   solution and it would be better if we could just keep all our config in
   files. Anyone who has followed the various config discussions over the
  past
   few years of discussion knows I'm the biggest proponent of immutable
   file-driven config.
  
   The analogy to normal unix services isn't actually quite right though.
   The problem Kafka has is that a number of the configurable entities it
   manages are added dynamically--topics, clients, consumer groups, etc.
  What
   this actually resembles is not a unix service like HTTPD but a database,
   and databases typically do manage config dynamically for exactly the same
   reason.
  
   The last few emails are arguing that files > ZK as a config solution. I
   agree with this, but that isn't really the question, right? The reality is
   that we need to be able to configure dynamically created entities and we
   won't get a satisfactory solution to that using files (e.g. rsync is not
  an
   acceptable topic creation mechanism). What we are discussing is having a
   single config mechanism or multiple. If we have multiple you need to
  solve
   the whole config lifecycle problem for both--management, audit, rollback,
   etc.
  
   Gwen, you were saying we couldn't get rid of the configuration file, not
   sure if I understand. Is that because we need to give the URL for ZK?
   Wouldn't the same argument work to say that we can't use configuration
   files because we have to specify the file path? I think we can just give
   the server the same --zookeeper argument we use everywhere else, right?
  
   -Jay
  
   On Sun, May 10, 2015 at 11:28 AM, Todd Palino tpal...@gmail.com wrote:
  
I've been watching this discussion for a while, and I have to jump in
  and
side with Gwen here. I see no benefit to putting the configs into
   Zookeeper
entirely, and a lot of downside. The two biggest problems I have with
   this
are:
   
1) Configuration management. OK, so you can write glue for Chef to put
configs into Zookeeper. You also need to write glue for Puppet. And
Cfengine. And everything else out there. Files are an industry standard
practice, they're how just about everyone handles it, and there are
  reasons
for that, not just it's the way it's always been done.
   
2) Auditing. Configuration files can easily be managed in a source
repository system which tracks what changes were made and who made
  them.
   It
also easily allows for rolling back to a previous version. Zookeeper
  does
not.
   
I see absolutely nothing wrong with putting the quota (client) configs
   and
the 

[jira] [Commented] (KAFKA-2150) FetcherThread backoff need to grab lock before wait on condition.

2015-05-11 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14538994#comment-14538994
 ] 

Guozhang Wang commented on KAFKA-2150:
--

My bad on missing this bug while reviewing. +1 and committed to trunk.

 FetcherThread backoff need to grab lock before wait on condition.
 -

 Key: KAFKA-2150
 URL: https://issues.apache.org/jira/browse/KAFKA-2150
 Project: Kafka
  Issue Type: Bug
Reporter: Jiangjie Qin
Assignee: Sriharsha Chintalapani
  Labels: newbie++
 Attachments: KAFKA-2150.patch, KAFKA-2150_2015-04-25_13:14:05.patch, 
 KAFKA-2150_2015-04-25_13:18:35.patch, KAFKA-2150_2015-04-25_13:35:36.patch


 Saw the following error: 
 kafka.api.ProducerBounceTest > testBrokerFailure STANDARD_OUT
 [2015-04-25 00:40:43,997] ERROR [ReplicaFetcherThread-0-0], Error due to  
 (kafka.server.ReplicaFetcherThread:103)
 java.lang.IllegalMonitorStateException
   at 
 java.util.concurrent.locks.ReentrantLock$Sync.tryRelease(ReentrantLock.java:127)
   at 
 java.util.concurrent.locks.AbstractQueuedSynchronizer.release(AbstractQueuedSynchronizer.java:1239)
   at 
 java.util.concurrent.locks.AbstractQueuedSynchronizer.fullyRelease(AbstractQueuedSynchronizer.java:1668)
   at 
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2107)
   at 
 kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:95)
   at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)
 [2015-04-25 00:40:47,064] ERROR [ReplicaFetcherThread-0-1], Error due to  
 (kafka.server.ReplicaFetcherThread:103)
 java.lang.IllegalMonitorStateException
   at 
 java.util.concurrent.locks.ReentrantLock$Sync.tryRelease(ReentrantLock.java:127)
   at 
 java.util.concurrent.locks.AbstractQueuedSynchronizer.release(AbstractQueuedSynchronizer.java:1239)
   at 
 java.util.concurrent.locks.AbstractQueuedSynchronizer.fullyRelease(AbstractQueuedSynchronizer.java:1668)
   at 
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2107)
   at 
 kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:95)
   at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)
 We should grab the lock before waiting on the condition.
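The fix amounts to the standard `ReentrantLock`/`Condition` idiom: acquire the lock around the timed wait. A self-contained demonstration (simplified from the fetcher-thread context):

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

// Condition.await() must be called with the associated lock held;
// otherwise ReentrantLock throws IllegalMonitorStateException, exactly
// as in the stack trace above.
public class BackoffDemo {
    private static final ReentrantLock lock = new ReentrantLock();
    private static final Condition cond = lock.newCondition();

    // Correct: grab the lock around the timed wait, release in finally.
    static void backoff(long ms) throws InterruptedException {
        lock.lock();
        try {
            cond.await(ms, TimeUnit.MILLISECONDS);
        } finally {
            lock.unlock();
        }
    }

    // Buggy variant (what the old code effectively did): await with no lock.
    static void backoffWithoutLock(long ms) throws InterruptedException {
        cond.await(ms, TimeUnit.MILLISECONDS);  // throws IllegalMonitorStateException
    }

    public static void main(String[] args) throws InterruptedException {
        backoff(10);  // returns normally after the timeout
        try {
            backoffWithoutLock(10);
        } catch (IllegalMonitorStateException e) {
            System.out.println("await without lock: " + e);
        }
    }
}
```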



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 33551: Patch for KAFKA-2150

2015-05-11 Thread Guozhang Wang

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/33551/#review83332
---

Ship it!


Ship It!

- Guozhang Wang


On April 25, 2015, 8:35 p.m., Sriharsha Chintalapani wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/33551/
 ---
 
 (Updated April 25, 2015, 8:35 p.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-2150
 https://issues.apache.org/jira/browse/KAFKA-2150
 
 
 Repository: kafka
 
 
 Description
 ---
 
 KAFKA-2150. FetcherThread backoff need to grab lock before wait on condition.
 
 
 Diffs
 -
 
   core/src/main/scala/kafka/server/AbstractFetcherThread.scala 
 a439046e118b6efcc3a5a9d9e8acb79f85e40398 
 
 Diff: https://reviews.apache.org/r/33551/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Sriharsha Chintalapani
 




[jira] [Updated] (KAFKA-2150) FetcherThread backoff need to grab lock before wait on condition.

2015-05-11 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-2150:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

 FetcherThread backoff need to grab lock before wait on condition.
 -

 Key: KAFKA-2150
 URL: https://issues.apache.org/jira/browse/KAFKA-2150
 Project: Kafka
  Issue Type: Bug
Reporter: Jiangjie Qin
Assignee: Sriharsha Chintalapani
  Labels: newbie++
 Attachments: KAFKA-2150.patch, KAFKA-2150_2015-04-25_13:14:05.patch, 
 KAFKA-2150_2015-04-25_13:18:35.patch, KAFKA-2150_2015-04-25_13:35:36.patch


 Saw the following error: 
 kafka.api.ProducerBounceTest > testBrokerFailure STANDARD_OUT
 [2015-04-25 00:40:43,997] ERROR [ReplicaFetcherThread-0-0], Error due to  
 (kafka.server.ReplicaFetcherThread:103)
 java.lang.IllegalMonitorStateException
   at 
 java.util.concurrent.locks.ReentrantLock$Sync.tryRelease(ReentrantLock.java:127)
   at 
 java.util.concurrent.locks.AbstractQueuedSynchronizer.release(AbstractQueuedSynchronizer.java:1239)
   at 
 java.util.concurrent.locks.AbstractQueuedSynchronizer.fullyRelease(AbstractQueuedSynchronizer.java:1668)
   at 
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2107)
   at 
 kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:95)
   at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)
 [2015-04-25 00:40:47,064] ERROR [ReplicaFetcherThread-0-1], Error due to  
 (kafka.server.ReplicaFetcherThread:103)
 java.lang.IllegalMonitorStateException
   at 
 java.util.concurrent.locks.ReentrantLock$Sync.tryRelease(ReentrantLock.java:127)
   at 
 java.util.concurrent.locks.AbstractQueuedSynchronizer.release(AbstractQueuedSynchronizer.java:1239)
   at 
 java.util.concurrent.locks.AbstractQueuedSynchronizer.fullyRelease(AbstractQueuedSynchronizer.java:1668)
   at 
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2107)
   at 
 kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:95)
   at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)
 We should grab the lock before waiting on the condition.
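 The fix is standard java.util.concurrent usage: a thread must own the
 ReentrantLock before calling await() on one of its Conditions, otherwise the
 AbstractQueuedSynchronizer throws the IllegalMonitorStateException seen in the
 stack traces above. A minimal, self-contained sketch of the pattern (class and
 method names are illustrative, not Kafka's actual code):

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class LockedBackoff {
    private static final ReentrantLock lock = new ReentrantLock();
    private static final Condition cond = lock.newCondition();

    // Awaiting without lock.lock() is the bug: Condition.await() requires the
    // calling thread to own the lock and throws IllegalMonitorStateException
    // otherwise. Acquiring the lock first makes the backoff safe.
    public static boolean backoff(long ms) {
        lock.lock();  // grab the lock before waiting on the condition
        try {
            cond.await(ms, TimeUnit.MILLISECONDS);  // lock is released while waiting
            return true;
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) {
        System.out.println(backoff(10));  // prints "true"
    }
}
```

 The signaling side does the mirror image (lock, signal or signalAll, unlock),
 which is presumably how the fetcher gets woken early when new partitions are
 added.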



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 33378: Patch for KAFKA-2136

2015-05-11 Thread Dong Lin

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/33378/#review83341
---



core/src/main/scala/kafka/api/FetchResponse.scala
https://reviews.apache.org/r/33378/#comment134302

Should delayTimeSize be deducted from expectedBytesToWrite?


- Dong Lin


On May 11, 2015, 9:51 p.m., Aditya Auradkar wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/33378/
 ---
 
 (Updated May 11, 2015, 9:51 p.m.)
 
 
 Review request for kafka, Joel Koshy and Jun Rao.
 
 
 Bugs: KAFKA-2136
 https://issues.apache.org/jira/browse/KAFKA-2136
 
 
 Repository: kafka
 
 
 Description
 ---
 
 Fixing bug
 
 
 Diffs
 -
 
   
 clients/src/main/java/org/apache/kafka/clients/consumer/internals/Fetcher.java
  ef9dd5238fbc771496029866ece1d85db6d7b7a5 
   
 clients/src/main/java/org/apache/kafka/clients/producer/internals/Sender.java 
 b2db91ca14bbd17fef5ce85839679144fff3f689 
   clients/src/main/java/org/apache/kafka/common/protocol/Protocol.java 
 3dc8b015afd2347a41c9a9dbc02b8e367da5f75f 
   clients/src/main/java/org/apache/kafka/common/requests/FetchRequest.java 
 8686d83aa52e435c6adafbe9ff4bd1602281072a 
   clients/src/main/java/org/apache/kafka/common/requests/FetchResponse.java 
 eb8951fba48c335095cc43fc3672de1c733e07ff 
   clients/src/main/java/org/apache/kafka/common/requests/ProduceRequest.java 
 fabeae3083a8ea55cdacbb9568f3847ccd85bab4 
   clients/src/main/java/org/apache/kafka/common/requests/ProduceResponse.java 
 37ec0b79beafcf5735c386b066eb319fb697eff5 
   
 clients/src/test/java/org/apache/kafka/clients/consumer/internals/FetcherTest.java
  419541011d652becf0cda7a5e62ce813cddb1732 
   
 clients/src/test/java/org/apache/kafka/clients/producer/internals/SenderTest.java
  8b1805d3d2bcb9fe2bacb37d870c3236aa9532c4 
   
 clients/src/test/java/org/apache/kafka/common/requests/RequestResponseTest.java
  e3cc1967e407b64cc734548c19e30de700b64ba8 
   core/src/main/scala/kafka/api/FetchRequest.scala 
 b038c15186c0cbcc65b59479324052498361b717 
   core/src/main/scala/kafka/api/FetchResponse.scala 
 75aaf57fb76ec01660d93701a57ae953d877d81c 
   core/src/main/scala/kafka/api/ProducerRequest.scala 
 570b2da1d865086f9830aa919a49063abbbe574d 
   core/src/main/scala/kafka/api/ProducerResponse.scala 
 5d1fac4cb8943f5bfaa487f8e9d9d2856efbd330 
   core/src/main/scala/kafka/consumer/SimpleConsumer.scala 
 31a2639477bf66f9a05d2b9b07794572d7ec393b 
   core/src/main/scala/kafka/server/AbstractFetcherThread.scala 
 a439046e118b6efcc3a5a9d9e8acb79f85e40398 
   core/src/main/scala/kafka/server/DelayedFetch.scala 
 de6cf5bdaa0e70394162febc63b50b55ca0a92db 
   core/src/main/scala/kafka/server/DelayedProduce.scala 
 05078b24ef28f2f4e099afa943e43f1d00359fda 
   core/src/main/scala/kafka/server/KafkaApis.scala 
 417960dd1ab407ebebad8fdb0e97415db3e91a2f 
   core/src/main/scala/kafka/server/OffsetManager.scala 
 18680ce100f10035175cc0263ba7787ab0f6a17a 
   core/src/main/scala/kafka/server/ReplicaFetcherThread.scala 
 b31b432a226ba79546dd22ef1d2acbb439c2e9a3 
   core/src/main/scala/kafka/server/ReplicaManager.scala 
 59c9bc3ac3a8afc07a6f8c88c5871304db588d17 
   core/src/test/scala/unit/kafka/api/RequestResponseSerializationTest.scala 
 5717165f2344823fabe8f7cfafae4bb8af2d949a 
   core/src/test/scala/unit/kafka/server/DelayedOperationTest.scala 
 f3ab3f4ff8eb1aa6b2ab87ba75f72eceb6649620 
   core/src/test/scala/unit/kafka/server/ReplicaManagerTest.scala 
 00d59337a99ac135e8689bd1ecd928f7b1423d79 
 
 Diff: https://reviews.apache.org/r/33378/diff/
 
 
 Testing
 ---
 
 New tests added
 
 
 Thanks,
 
 Aditya Auradkar
 




[jira] [Commented] (KAFKA-1997) Refactor Mirror Maker

2015-05-11 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14538801#comment-14538801
 ] 

Jiangjie Qin commented on KAFKA-1997:
-

Hey [~junrao], good point. I will do that when I incorporate the close(timeout) 
into Mirror Maker.
Actually, I'm a little bit confused: why not provide a default 
serializer/deserializer? For a new user of Kafka, it might be difficult to find 
the right class to use. Right?

 Refactor Mirror Maker
 -

 Key: KAFKA-1997
 URL: https://issues.apache.org/jira/browse/KAFKA-1997
 Project: Kafka
  Issue Type: Improvement
Reporter: Jiangjie Qin
Assignee: Jiangjie Qin
 Attachments: KAFKA-1997.patch, KAFKA-1997.patch, 
 KAFKA-1997_2015-03-03_16:28:46.patch, KAFKA-1997_2015-03-04_15:07:46.patch, 
 KAFKA-1997_2015-03-04_15:42:45.patch, KAFKA-1997_2015-03-05_20:14:58.patch, 
 KAFKA-1997_2015-03-09_18:55:54.patch, KAFKA-1997_2015-03-10_18:31:34.patch, 
 KAFKA-1997_2015-03-11_15:20:18.patch, KAFKA-1997_2015-03-11_19:10:53.patch, 
 KAFKA-1997_2015-03-13_14:43:34.patch, KAFKA-1997_2015-03-17_13:47:01.patch, 
 KAFKA-1997_2015-03-18_12:47:32.patch


 Refactor mirror maker based on KIP-3



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 33049: Patch for KAFKA-2084

2015-05-11 Thread Aditya Auradkar

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/33049/
---

(Updated May 11, 2015, 11:16 p.m.)


Review request for kafka, Joel Koshy and Jun Rao.


Bugs: KAFKA-2084
https://issues.apache.org/jira/browse/KAFKA-2084


Repository: kafka


Description (updated)
---

This is currently not being used anywhere in the code because I haven't yet 
figured out how to enforce delays i.e. purgatory vs delay queue. I'll have a 
better idea once I look at the new purgatory implementation. Hopefully, this 
smaller patch is easier to review.

Added more testcases


Some locking changes for reading/creating the sensors


WIP patch


Sample usage in ReplicaManager


Updated patch for quotas. This patch does the following:
1. Add per-client metrics for both producers and consumers.
2. Add configuration for quotas.
3. Compute delay times in the metrics package and return the delay times in 
QuotaViolationException.
4. Add a DelayQueue in KafkaApis that can be used to throttle any type of 
request. Implemented request throttling for produce and fetch requests.
5. Added unit and integration test cases. I've not yet added integration 
test cases testing the consumer delays; will update the patch once those are 
ready.


Incorporated Jun's comments


Adding javadoc


KAFKA-2084 - Moved the callbacks to ClientQuotaMetrics


Adding more configs


Don't quota replica traffic


Diffs (updated)
-

  clients/src/main/java/org/apache/kafka/common/metrics/MetricConfig.java 
dfa1b0a11042ad9d127226f0e0cec8b1d42b8441 
  clients/src/main/java/org/apache/kafka/common/metrics/Quota.java 
d82bb0c055e631425bc1ebbc7d387baac76aeeaa 
  
clients/src/main/java/org/apache/kafka/common/metrics/QuotaViolationException.java
 a451e5385c9eca76b38b425e8ac856b2715fcffe 
  clients/src/main/java/org/apache/kafka/common/metrics/Sensor.java 
ca823fd4639523018311b814fde69b6177e73b97 
  clients/src/test/java/org/apache/kafka/common/utils/MockTime.java  
  core/src/main/scala/kafka/server/ClientQuotaMetrics.scala PRE-CREATION 
  core/src/main/scala/kafka/server/KafkaApis.scala 
417960dd1ab407ebebad8fdb0e97415db3e91a2f 
  core/src/main/scala/kafka/server/KafkaConfig.scala 
9efa15ca5567b295ab412ee9eea7c03eb4cdc18b 
  core/src/main/scala/kafka/server/KafkaServer.scala 
b7d2a2842e17411a823b93bdedc84657cbd62be1 
  core/src/main/scala/kafka/server/ReplicaManager.scala 
59c9bc3ac3a8afc07a6f8c88c5871304db588d17 
  core/src/main/scala/kafka/server/ThrottledRequest.scala PRE-CREATION 
  core/src/main/scala/kafka/utils/ShutdownableThread.scala 
fc226c863095b7761290292cd8755cd7ad0f155c 
  core/src/test/scala/integration/kafka/api/QuotasTest.scala PRE-CREATION 
  core/src/test/scala/unit/kafka/server/ClientQuotaMetricsTest.scala 
PRE-CREATION 
  core/src/test/scala/unit/kafka/server/KafkaConfigConfigDefTest.scala 
8014a5a6c362785539f24eb03d77278434614fe6 
  core/src/test/scala/unit/kafka/server/ThrottledRequestExpirationTest.scala 
PRE-CREATION 

Diff: https://reviews.apache.org/r/33049/diff/


Testing
---


Thanks,

Aditya Auradkar



[jira] [Commented] (KAFKA-2084) byte rate metrics per client ID (producer and consumer)

2015-05-11 Thread Aditya A Auradkar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14538850#comment-14538850
 ] 

Aditya A Auradkar commented on KAFKA-2084:
--

Updated reviewboard https://reviews.apache.org/r/33049/diff/
 against branch origin/trunk

 byte rate metrics per client ID (producer and consumer)
 ---

 Key: KAFKA-2084
 URL: https://issues.apache.org/jira/browse/KAFKA-2084
 Project: Kafka
  Issue Type: Sub-task
Reporter: Aditya Auradkar
Assignee: Aditya Auradkar
 Attachments: KAFKA-2084.patch, KAFKA-2084_2015-04-09_18:10:56.patch, 
 KAFKA-2084_2015-04-10_17:24:34.patch, KAFKA-2084_2015-04-21_12:21:18.patch, 
 KAFKA-2084_2015-04-21_12:28:05.patch, KAFKA-2084_2015-05-05_15:27:35.patch, 
 KAFKA-2084_2015-05-05_17:52:02.patch, KAFKA-2084_2015-05-11_16:16:01.patch


 We need to be able to track the bytes-in/bytes-out rate on a per-client ID 
 basis. This is necessary for quotas.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2084) byte rate metrics per client ID (producer and consumer)

2015-05-11 Thread Aditya A Auradkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aditya A Auradkar updated KAFKA-2084:
-
Attachment: KAFKA-2084_2015-05-11_16:16:01.patch

 byte rate metrics per client ID (producer and consumer)
 ---

 Key: KAFKA-2084
 URL: https://issues.apache.org/jira/browse/KAFKA-2084
 Project: Kafka
  Issue Type: Sub-task
Reporter: Aditya Auradkar
Assignee: Aditya Auradkar
 Attachments: KAFKA-2084.patch, KAFKA-2084_2015-04-09_18:10:56.patch, 
 KAFKA-2084_2015-04-10_17:24:34.patch, KAFKA-2084_2015-04-21_12:21:18.patch, 
 KAFKA-2084_2015-04-21_12:28:05.patch, KAFKA-2084_2015-05-05_15:27:35.patch, 
 KAFKA-2084_2015-05-05_17:52:02.patch, KAFKA-2084_2015-05-11_16:16:01.patch


 We need to be able to track the bytes-in/bytes-out rate on a per-client ID 
 basis. This is necessary for quotas.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 33620: Patch for KAFKA-1690

2015-05-11 Thread Sriharsha Chintalapani

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/33620/
---

(Updated May 11, 2015, 11:09 p.m.)


Review request for kafka.


Bugs: KAFKA-1690
https://issues.apache.org/jira/browse/KAFKA-1690


Repository: kafka


Description (updated)
---

KAFKA-1690. new java producer needs ssl support as a client.


KAFKA-1690. new java producer needs ssl support as a client.


KAFKA-1690. new java producer needs ssl support as a client.


KAFKA-1690. new java producer needs ssl support as a client. SSLFactory tests.
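
The core of client-side SSL support in JSSE is an SSLEngine created from an
SSLContext and put into client mode; the patch's SSLFactory and
SSLTransportLayer presumably wrap this around the existing Selector. Below is a
minimal sketch of that JSSE usage only; the class name and the default-context
shortcut are illustrative, not the patch's actual API, which would build the
context from configured keystore/truststore settings:

```java
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLEngine;

public class ClientSslSketch {
    // Hand out an SSLEngine configured for the client side of the handshake.
    // A real factory would initialize its own SSLContext from keystore and
    // truststore configs instead of using the JVM default.
    public static SSLEngine clientEngine(String host, int port) throws Exception {
        SSLContext context = SSLContext.getDefault();
        SSLEngine engine = context.createSSLEngine(host, port);  // host/port are session-reuse hints
        engine.setUseClientMode(true);  // producer/consumer connections act as TLS clients
        return engine;
    }

    public static void main(String[] args) throws Exception {
        SSLEngine engine = clientEngine("localhost", 9093);
        System.out.println(engine.getUseClientMode());  // prints "true"
    }
}
```

The non-blocking transport layer then drives this engine's wrap()/unwrap()
calls against the socket channel during the handshake and for application data.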


Diffs (updated)
-

  build.gradle fef515b3b2276b1f861e7cc2e33e74c3ce5e405b 
  checkstyle/checkstyle.xml a215ff36e9252879f1e0be5a86fef9a875bb8f38 
  checkstyle/import-control.xml f2e6cec267e67ce8e261341e373718e14a8e8e03 
  clients/src/main/java/org/apache/kafka/clients/ClientUtils.java 
0d68bf1e1e90fe9d5d4397ddf817b9a9af8d9f7a 
  clients/src/main/java/org/apache/kafka/clients/CommonClientConfigs.java 
cf32e4e7c40738fe6d8adc36ae0cfad459ac5b0b 
  clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerConfig.java 
bdff518b732105823058e6182f445248b45dc388 
  clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java 
d301be4709f7b112e1f3a39f3c04cfa65f00fa60 
  clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java 
42b12928781463b56fc4a45d96bb4da2745b6d95 
  clients/src/main/java/org/apache/kafka/clients/producer/ProducerConfig.java 
187d0004c8c46b6664ddaffecc6166d4b47351e5 
  clients/src/main/java/org/apache/kafka/common/config/AbstractConfig.java 
c4fa058692f50abb4f47bd344119d805c60123f5 
  clients/src/main/java/org/apache/kafka/common/network/Authenticator.java 
PRE-CREATION 
  clients/src/main/java/org/apache/kafka/common/network/Channel.java 
PRE-CREATION 
  
clients/src/main/java/org/apache/kafka/common/network/DefaultAuthenticator.java 
PRE-CREATION 
  
clients/src/main/java/org/apache/kafka/common/network/PlainTextTransportLayer.java
 PRE-CREATION 
  clients/src/main/java/org/apache/kafka/common/network/SSLFactory.java 
PRE-CREATION 
  clients/src/main/java/org/apache/kafka/common/network/SSLTransportLayer.java 
PRE-CREATION 
  clients/src/main/java/org/apache/kafka/common/network/Selectable.java 
b5f8d83e89f9026dc0853e5f92c00b2d7f043e22 
  clients/src/main/java/org/apache/kafka/common/network/Selector.java 
57de0585e5e9a53eb9dcd99cac1ab3eb2086a302 
  clients/src/main/java/org/apache/kafka/common/network/TransportLayer.java 
PRE-CREATION 
  clients/src/main/java/org/apache/kafka/common/protocol/SecurityProtocol.java 
dab1a94dd29563688b6ecf4eeb0e180b06049d3f 
  clients/src/main/java/org/apache/kafka/common/utils/Utils.java 
f73eedb030987f018d8446bb1dcd98d19fa97331 
  clients/src/test/java/org/apache/kafka/common/network/EchoServer.java 
PRE-CREATION 
  clients/src/test/java/org/apache/kafka/common/network/SSLFactoryTest.java 
PRE-CREATION 
  clients/src/test/java/org/apache/kafka/common/network/SSLSelectorTest.java 
PRE-CREATION 
  clients/src/test/java/org/apache/kafka/common/network/SelectorTest.java 
d5b306b026e788b4e5479f3419805aa49ae889f3 
  clients/src/test/java/org/apache/kafka/common/utils/UtilsTest.java 
2ebe3c21f611dc133a2dbb8c7dfb0845f8c21498 
  clients/src/test/java/org/apache/kafka/test/TestSSLUtils.java PRE-CREATION 

Diff: https://reviews.apache.org/r/33620/diff/


Testing
---


Thanks,

Sriharsha Chintalapani



[jira] [Commented] (KAFKA-1690) new java producer needs ssl support as a client

2015-05-11 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14538838#comment-14538838
 ] 

Sriharsha Chintalapani commented on KAFKA-1690:
---

Updated reviewboard https://reviews.apache.org/r/33620/diff/
 against branch origin/trunk

 new java producer needs ssl support as a client
 ---

 Key: KAFKA-1690
 URL: https://issues.apache.org/jira/browse/KAFKA-1690
 Project: Kafka
  Issue Type: Sub-task
Reporter: Joe Stein
Assignee: Sriharsha Chintalapani
 Fix For: 0.8.3

 Attachments: KAFKA-1690.patch, KAFKA-1690.patch, 
 KAFKA-1690_2015-05-10_23:20:30.patch, KAFKA-1690_2015-05-10_23:31:42.patch, 
 KAFKA-1690_2015-05-11_16:09:36.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1690) new java producer needs ssl support as a client

2015-05-11 Thread Sriharsha Chintalapani (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sriharsha Chintalapani updated KAFKA-1690:
--
Attachment: KAFKA-1690_2015-05-11_16:09:36.patch

 new java producer needs ssl support as a client
 ---

 Key: KAFKA-1690
 URL: https://issues.apache.org/jira/browse/KAFKA-1690
 Project: Kafka
  Issue Type: Sub-task
Reporter: Joe Stein
Assignee: Sriharsha Chintalapani
 Fix For: 0.8.3

 Attachments: KAFKA-1690.patch, KAFKA-1690.patch, 
 KAFKA-1690_2015-05-10_23:20:30.patch, KAFKA-1690_2015-05-10_23:31:42.patch, 
 KAFKA-1690_2015-05-11_16:09:36.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 33049: Patch for KAFKA-2084

2015-05-11 Thread Aditya Auradkar

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/33049/
---

(Updated May 11, 2015, 11:17 p.m.)


Review request for kafka, Joel Koshy and Jun Rao.


Bugs: KAFKA-2084
https://issues.apache.org/jira/browse/KAFKA-2084


Repository: kafka


Description (updated)
---

Updated patch for quotas. This patch does the following: 
1. Add per-client metrics for both producers and consumers 
2. Add configuration for quotas 
3. Compute delay times in the metrics package and return the delay times in 
QuotaViolationException 
4. Add a DelayQueue in KafkaApis that can be used to throttle any type of 
request. Implemented request throttling for produce and fetch requests. 
5. Added unit and integration test cases.
6. This doesn't include a system test. There is a separate ticket for that.
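
Item 4 can be sketched with java.util.concurrent.DelayQueue: a quota-violating
request is wrapped in a Delayed element and its response is only completed once
the computed delay expires. The names below are illustrative stand-ins for the
patch's ThrottledRequest, not the actual implementation:

```java
import java.util.concurrent.DelayQueue;
import java.util.concurrent.Delayed;
import java.util.concurrent.TimeUnit;

// Illustrative stand-in for a throttled request: it becomes available for
// processing only after its computed quota delay has elapsed.
class DelayedRequest implements Delayed {
    final String clientId;
    private final long expireNanos;

    DelayedRequest(String clientId, long delayMs) {
        this.clientId = clientId;
        this.expireNanos = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(delayMs);
    }

    @Override
    public long getDelay(TimeUnit unit) {
        return unit.convert(expireNanos - System.nanoTime(), TimeUnit.NANOSECONDS);
    }

    @Override
    public int compareTo(Delayed other) {
        return Long.compare(getDelay(TimeUnit.NANOSECONDS),
                            other.getDelay(TimeUnit.NANOSECONDS));
    }
}

public class ThrottleSketch {
    public static String throttleOnce(String clientId, long delayMs) throws InterruptedException {
        DelayQueue<DelayedRequest> queue = new DelayQueue<>();
        queue.put(new DelayedRequest(clientId, delayMs));
        // take() blocks until the delay expires; an expiration thread would
        // then send the delayed produce/fetch response.
        return queue.take().clientId;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(throttleOnce("client-a", 20));  // prints "client-a"
    }
}
```

A dedicated expiration thread draining the queue gives throttling for any
request type without blocking the request handler threads.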


Diffs
-

  clients/src/main/java/org/apache/kafka/common/metrics/MetricConfig.java 
dfa1b0a11042ad9d127226f0e0cec8b1d42b8441 
  clients/src/main/java/org/apache/kafka/common/metrics/Quota.java 
d82bb0c055e631425bc1ebbc7d387baac76aeeaa 
  
clients/src/main/java/org/apache/kafka/common/metrics/QuotaViolationException.java
 a451e5385c9eca76b38b425e8ac856b2715fcffe 
  clients/src/main/java/org/apache/kafka/common/metrics/Sensor.java 
ca823fd4639523018311b814fde69b6177e73b97 
  clients/src/test/java/org/apache/kafka/common/utils/MockTime.java  
  core/src/main/scala/kafka/server/ClientQuotaMetrics.scala PRE-CREATION 
  core/src/main/scala/kafka/server/KafkaApis.scala 
417960dd1ab407ebebad8fdb0e97415db3e91a2f 
  core/src/main/scala/kafka/server/KafkaConfig.scala 
9efa15ca5567b295ab412ee9eea7c03eb4cdc18b 
  core/src/main/scala/kafka/server/KafkaServer.scala 
b7d2a2842e17411a823b93bdedc84657cbd62be1 
  core/src/main/scala/kafka/server/ReplicaManager.scala 
59c9bc3ac3a8afc07a6f8c88c5871304db588d17 
  core/src/main/scala/kafka/server/ThrottledRequest.scala PRE-CREATION 
  core/src/main/scala/kafka/utils/ShutdownableThread.scala 
fc226c863095b7761290292cd8755cd7ad0f155c 
  core/src/test/scala/integration/kafka/api/QuotasTest.scala PRE-CREATION 
  core/src/test/scala/unit/kafka/server/ClientQuotaMetricsTest.scala 
PRE-CREATION 
  core/src/test/scala/unit/kafka/server/KafkaConfigConfigDefTest.scala 
8014a5a6c362785539f24eb03d77278434614fe6 
  core/src/test/scala/unit/kafka/server/ThrottledRequestExpirationTest.scala 
PRE-CREATION 

Diff: https://reviews.apache.org/r/33049/diff/


Testing
---


Thanks,

Aditya Auradkar



[jira] [Commented] (KAFKA-1997) Refactor Mirror Maker

2015-05-11 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14538762#comment-14538762
 ] 

Jun Rao commented on KAFKA-1997:


A late comment: shouldn't we hardcode the key/value serializer to be 
ByteSerializer in the producer? Both of them are required properties.

 Refactor Mirror Maker
 -

 Key: KAFKA-1997
 URL: https://issues.apache.org/jira/browse/KAFKA-1997
 Project: Kafka
  Issue Type: Improvement
Reporter: Jiangjie Qin
Assignee: Jiangjie Qin
 Attachments: KAFKA-1997.patch, KAFKA-1997.patch, 
 KAFKA-1997_2015-03-03_16:28:46.patch, KAFKA-1997_2015-03-04_15:07:46.patch, 
 KAFKA-1997_2015-03-04_15:42:45.patch, KAFKA-1997_2015-03-05_20:14:58.patch, 
 KAFKA-1997_2015-03-09_18:55:54.patch, KAFKA-1997_2015-03-10_18:31:34.patch, 
 KAFKA-1997_2015-03-11_15:20:18.patch, KAFKA-1997_2015-03-11_19:10:53.patch, 
 KAFKA-1997_2015-03-13_14:43:34.patch, KAFKA-1997_2015-03-17_13:47:01.patch, 
 KAFKA-1997_2015-03-18_12:47:32.patch


 Refactor mirror maker based on KIP-3



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2103) kafka.producer.AsyncProducerTest failure.

2015-05-11 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2103:
---
Reviewer: Jun Rao

 kafka.producer.AsyncProducerTest failure.
 -

 Key: KAFKA-2103
 URL: https://issues.apache.org/jira/browse/KAFKA-2103
 Project: Kafka
  Issue Type: Sub-task
Reporter: Jiangjie Qin
Assignee: Ewen Cheslack-Postava
 Attachments: KAFKA-2103.patch


 I saw this test consistently failing on trunk.
 The recent changes are KAFKA-2099, KAFKA-1926, KAFKA-1809.
 kafka.producer.AsyncProducerTest > testNoBroker FAILED
 org.scalatest.junit.JUnitTestFailedError: Should fail with 
 FailedToSendMessageException
 at 
 org.scalatest.junit.AssertionsForJUnit$class.newAssertionFailedException(AssertionsForJUnit.scala:101)
 at 
 org.scalatest.junit.JUnit3Suite.newAssertionFailedException(JUnit3Suite.scala:149)
 at org.scalatest.Assertions$class.fail(Assertions.scala:711)
 at org.scalatest.junit.JUnit3Suite.fail(JUnit3Suite.scala:149)
 at 
 kafka.producer.AsyncProducerTest.testNoBroker(AsyncProducerTest.scala:300)
 kafka.producer.AsyncProducerTest > testIncompatibleEncoder PASSED
 kafka.producer.AsyncProducerTest > testRandomPartitioner PASSED
 kafka.producer.AsyncProducerTest > testFailedSendRetryLogic FAILED
 kafka.common.FailedToSendMessageException: Failed to send messages after 
 3 tries.
 at 
 kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:91)
 at 
 kafka.producer.AsyncProducerTest.testFailedSendRetryLogic(AsyncProducerTest.scala:415)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (KAFKA-2167) ZkUtils updateEphemeralPath JavaDoc (spelling and correctness)

2015-05-11 Thread Neelesh Srinivas Salian (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neelesh Srinivas Salian reassigned KAFKA-2167:
--

Assignee: Neelesh Srinivas Salian

 ZkUtils updateEphemeralPath JavaDoc (spelling and correctness)
 --

 Key: KAFKA-2167
 URL: https://issues.apache.org/jira/browse/KAFKA-2167
 Project: Kafka
  Issue Type: Bug
Reporter: Jon Bringhurst
Assignee: Neelesh Srinivas Salian
  Labels: newbie

 I'm not 100% sure on this, but it seems like "persistent" should instead say 
 "ephemeral" in the JavaDoc. Also, note that "parrent" is misspelled.
 {noformat}
   /**
    * Update the value of a persistent node with the given path and data.
    * create parrent directory if necessary. Never throw NodeExistException.
    */
   def updateEphemeralPath(client: ZkClient, path: String, data: String): Unit = {
     try {
       client.writeData(path, data)
     }
     catch {
       case e: ZkNoNodeException => {
         createParentPath(client, path)
         client.createEphemeral(path, data)
       }
       case e2 => throw e2
     }
   }
 {noformat}
 should be:
 {noformat}
   /**
    * Update the value of an ephemeral node with the given path and data.
    * create parent directory if necessary. Never throw NodeExistException.
    */
   def updateEphemeralPath(client: ZkClient, path: String, data: String): Unit = {
     try {
       client.writeData(path, data)
     }
     catch {
       case e: ZkNoNodeException => {
         createParentPath(client, path)
         client.createEphemeral(path, data)
       }
       case e2 => throw e2
     }
   }
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: Corrected the Changes in ZkUtils.scala - KAFKA...

2015-05-11 Thread neeleshCloud
GitHub user neeleshCloud opened a pull request:

https://github.com/apache/kafka/pull/63

Corrected the Changes in ZkUtils.scala - KAFKA-2167

Corrected Spelling errors.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/neeleshCloud/kafka 0.8.2

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/63.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #63


commit 904af08e2437d2ddf57017a3f3f1decc2087c491
Author: Neelesh Srinivas Salian nsal...@cloudera.com
Date:   2015-05-12T05:17:42Z

KAFKA-2167




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-2167) ZkUtils updateEphemeralPath JavaDoc (spelling and correctness)

2015-05-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14539267#comment-14539267
 ] 

ASF GitHub Bot commented on KAFKA-2167:
---

GitHub user neeleshCloud opened a pull request:

https://github.com/apache/kafka/pull/63

Corrected the Changes in ZkUtils.scala - KAFKA-2167

Corrected Spelling errors.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/neeleshCloud/kafka 0.8.2

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/63.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #63


commit 904af08e2437d2ddf57017a3f3f1decc2087c491
Author: Neelesh Srinivas Salian nsal...@cloudera.com
Date:   2015-05-12T05:17:42Z

KAFKA-2167




 ZkUtils updateEphemeralPath JavaDoc (spelling and correctness)
 --

 Key: KAFKA-2167
 URL: https://issues.apache.org/jira/browse/KAFKA-2167
 Project: Kafka
  Issue Type: Bug
Reporter: Jon Bringhurst
Assignee: Neelesh Srinivas Salian
  Labels: newbie

 I'm not 100% sure on this, but it seems like "persistent" should instead say 
 "ephemeral" in the JavaDoc. Also, note that "parrent" is misspelled.
 {noformat}
   /**
    * Update the value of a persistent node with the given path and data.
    * create parrent directory if necessary. Never throw NodeExistException.
    */
   def updateEphemeralPath(client: ZkClient, path: String, data: String): Unit = {
     try {
       client.writeData(path, data)
     }
     catch {
       case e: ZkNoNodeException => {
         createParentPath(client, path)
         client.createEphemeral(path, data)
       }
       case e2 => throw e2
     }
   }
 {noformat}
 should be:
 {noformat}
   /**
    * Update the value of an ephemeral node with the given path and data.
    * create parent directory if necessary. Never throw NodeExistException.
    */
   def updateEphemeralPath(client: ZkClient, path: String, data: String): Unit = {
     try {
       client.writeData(path, data)
     }
     catch {
       case e: ZkNoNodeException => {
         createParentPath(client, path)
         client.createEphemeral(path, data)
       }
       case e2 => throw e2
     }
   }
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [DISCUSS] KIP-21 Configuration Management

2015-05-11 Thread Ashish Singh
I agree with Joel's suggestion of keeping broker configs in a
config file and client/topic configs in ZK. A few other projects, Apache
Solr for one, do something similar for their configurations.

On Monday, May 11, 2015, Gwen Shapira gshap...@cloudera.com wrote:

 I like this approach (obviously).
 I am also OK with supporting broker re-read of config file based on ZK
 watch instead of SIGHUP, if we see this as more consistent with the rest of
 our code base.

 Either is fine by me as long as brokers keep the file and just do refresh
 :)

 On Tue, May 12, 2015 at 2:54 AM, Joel Koshy jjkosh...@gmail.com
 wrote:

  So the general concern here is the dichotomy of configs (which we
  already have - i.e., in the form of broker config files vs topic
  configs in zookeeper). We (at LinkedIn) had some discussions on this
  last week and had this very question for the operations team whose
  opinion is I think to a large degree a touchstone for this decision:
  Has the operations team at LinkedIn experienced any pain so far with
  managing topic configs in ZooKeeper (while broker configs are
  file-based)? It turns out that ops overwhelmingly favors the current
  approach. i.e., service configs as file-based configs and client/topic
  configs in ZooKeeper is intuitive and works great. This may be
  somewhat counter-intuitive to devs, but this is one of those decisions
  for which ops input is very critical - because for all practical
  purposes, they are the users in this discussion.
 
  If we continue with this dichotomy and need to support dynamic config
  for client/topic configs as well as select service configs then there
  will need to be dichotomy in the config change mechanism as well.
  i.e., client/topic configs will change via (say) a ZooKeeper watch and
  the service config will change via a config file re-read (on SIGHUP)
  after config changes have been pushed out to local files. Is this a
  bad thing? Personally, I don't think it is - i.e. I'm in favor of this
  approach. What do others think?
 
  Thanks,
 
  Joel
 
  On Mon, May 11, 2015 at 11:08:44PM +0300, Gwen Shapira wrote:
   What Todd said :)
  
   (I think my ops background is showing...)
  
   On Mon, May 11, 2015 at 10:17 PM, Todd Palino tpal...@gmail.com
 wrote:
  
 I understand your point here, Jay, but I disagree that we can't have two
 configuration systems. We have two different types of configuration
 information. We have configuration that relates to the service itself (the
 Kafka broker), and we have configuration that relates to the content within
 the service (topics). I would put the client configuration (quotas) in with
 the second part, as it is dynamic information. I just don't see a good
 argument for effectively degrading the configuration for the service
 because of trying to keep it paired with the configuration of dynamic
 resources.
   
-Todd
   
On Mon, May 11, 2015 at 11:33 AM, Jay Kreps jay.kr...@gmail.com
  wrote:
   
  I totally agree that ZK is not in-and-of-itself a configuration
  management solution and it would be better if we could just keep all
  our config in files. Anyone who has followed the various config
  discussions over the past few years knows I'm the biggest proponent of
  immutable file-driven config.

  The analogy to normal unix services isn't actually quite right though.
  The problem Kafka has is that a number of the configurable entities it
  manages are added dynamically--topics, clients, consumer groups, etc.
  What this actually resembles is not a unix service like HTTPD but a
  database, and databases typically do manage config dynamically for
  exactly the same reason.

  The last few emails are arguing that files > ZK as a config solution.
  I agree with this, but that isn't really the question, right? The
  reality is that we need to be able to configure dynamically created
  entities and we won't get a satisfactory solution to that using files
  (e.g. rsync is not an acceptable topic creation mechanism). What we
  are discussing is having a single config mechanism or multiple. If we
  have multiple you need to solve the whole config lifecycle problem for
  both--management, audit, rollback, etc.

  Gwen, you were saying we couldn't get rid of the configuration file,
  not sure if I understand. Is that because we need to give the URL for
  ZK? Wouldn't the same argument work to say that we can't use
  configuration files because we have to specify the file path? I think
  we can just give the server the same --zookeeper argument we use
  everywhere else, right?

 -Jay

  On Sun, May 10, 2015 at 11:28 AM, Todd Palino tpal...@gmail.com wrote:

  I've been watching this discussion for a while, and I 

[jira] [Commented] (KAFKA-2175) Reduce server log verbosity at info level

2015-05-11 Thread Gwen Shapira (JIRA)

[ https://issues.apache.org/jira/browse/KAFKA-2175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14539293#comment-14539293 ]

Gwen Shapira commented on KAFKA-2175:
-

Non-binding +1. 
Probably the most useful 2-line change that can be done in Kafka.

It should get backported to 0.8.2 if we plan another release off that branch.

 Reduce server log verbosity at info level
 -

 Key: KAFKA-2175
 URL: https://issues.apache.org/jira/browse/KAFKA-2175
 Project: Kafka
  Issue Type: Improvement
  Components: controller, zkclient
Affects Versions: 0.8.3
Reporter: Todd Palino
Assignee: Todd Palino
Priority: Minor
  Labels: newbie
 Attachments: KAFKA-2175.patch


 Currently, the broker logs two messages at INFO level that should be at a 
 lower level. This serves only to fill up log files on disk, and can cause 
 performance issues due to synchronous logging as well.
 The first is the Closing socket connection message when there is no error. 
 This should be reduced to debug level. The second is the message that ZkUtil 
 writes when updating the partition reassignment JSON. This message contains 
 the entire JSON blob and should never be written at info level. In addition, 
 there is already a message in the controller log stating that the ZK node has 
 been updated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2175) Reduce server log verbosity at info level

2015-05-11 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2175:
---
Reviewer: Joel Koshy

 Reduce server log verbosity at info level
 -

 Key: KAFKA-2175
 URL: https://issues.apache.org/jira/browse/KAFKA-2175
 Project: Kafka
  Issue Type: Improvement
  Components: controller, zkclient
Affects Versions: 0.8.3
Reporter: Todd Palino
Assignee: Todd Palino
Priority: Minor
  Labels: newbie
 Attachments: KAFKA-2175.patch


 Currently, the broker logs two messages at INFO level that should be at a 
 lower level. This serves only to fill up log files on disk, and can cause 
 performance issues due to synchronous logging as well.
 The first is the Closing socket connection message when there is no error. 
 This should be reduced to debug level. The second is the message that ZkUtil 
 writes when updating the partition reassignment JSON. This message contains 
 the entire JSON blob and should never be written at info level. In addition, 
 there is already a message in the controller log stating that the ZK node has 
 been updated.





[jira] [Updated] (KAFKA-2174) Wrong TopicMetadata deserialization

2015-05-11 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2174:
---
Reviewer: Jun Rao

 Wrong TopicMetadata deserialization
 ---

 Key: KAFKA-2174
 URL: https://issues.apache.org/jira/browse/KAFKA-2174
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.2.1
Reporter: Alexey Ozeritskiy
 Attachments: KAFKA-2174.patch


 TopicMetadata.readFrom assumes that the ByteBuffer always contains the full 
 set of partitions, but that is not always the case. On incomplete metadata we 
 get a java.lang.ArrayIndexOutOfBoundsException:
 {code}
 java.lang.ArrayIndexOutOfBoundsException: 47
 at 
 kafka.api.TopicMetadata$$anonfun$readFrom$1.apply$mcVI$sp(TopicMetadata.scala:38)
 at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)
 at kafka.api.TopicMetadata$.readFrom(TopicMetadata.scala:36)
 at 
 kafka.api.TopicMetadataResponse$$anonfun$3.apply(TopicMetadataResponse.scala:31)
 at 
 kafka.api.TopicMetadataResponse$$anonfun$3.apply(TopicMetadataResponse.scala:31)
 at 
 scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
 at 
 scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
 at scala.collection.immutable.Range.foreach(Range.scala:141)
 at 
 scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
 at scala.collection.AbstractTraversable.map(Traversable.scala:105)
 at 
 kafka.api.TopicMetadataResponse$.readFrom(TopicMetadataResponse.scala:31)
 {code}
 We sometimes get this exception on broker restart (kill -TERM, 
 controlled.shutdown.enable=false).
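
 [The defensive fix the report implies can be sketched generically. This is 
 an illustrative example of bounds-checked, length-prefixed deserialization, 
 not the actual TopicMetadata.readFrom code; SafeRead and readInts are 
 hypothetical names.]

 {code}
 import java.nio.ByteBuffer

 object SafeRead {
   // Reads a 4-byte count followed by that many 4-byte ints. The bug
   // described above arises when the count prefix promises more entries
   // than the buffer holds; a defensive reader checks remaining() before
   // reading instead of trusting the count and throwing mid-loop.
   def readInts(buf: ByteBuffer): Either[String, Seq[Int]] =
     if (buf.remaining() < 4) Left("buffer too short to hold the count")
     else {
       val count = buf.getInt()
       if (4L * count > buf.remaining())
         Left(s"count $count exceeds the ${buf.remaining()} remaining bytes")
       else
         Right((0 until count).map(_ => buf.getInt()))
     }
 }
 {code}

 [A complete buffer parses normally, while a truncated one yields an error 
 value instead of an ArrayIndexOutOfBoundsException.]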




