[jira] [Commented] (KAFKA-2055) ConsumerBounceTest.testSeekAndCommitWithBrokerFailures transient failure

2015-04-10 Thread Fangmin Lv (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14490758#comment-14490758
 ] 

Fangmin Lv commented on KAFKA-2055:
---

Hi Guozhang,

I cannot find ConsumerBounceTest.scala, but I do see this failure occurring:

kafka.api.ConsumerTest > testSeekAndCommitWithBrokerFailures FAILED
java.lang.AssertionError: expected:<1000> but was:<790>
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.failNotEquals(Assert.java:689)
at org.junit.Assert.assertEquals(Assert.java:127)
at org.junit.Assert.assertEquals(Assert.java:514)
at org.junit.Assert.assertEquals(Assert.java:498)
at 
kafka.api.ConsumerTest.seekAndCommitWithBrokerFailures(ConsumerTest.scala:201)
at 
kafka.api.ConsumerTest.testSeekAndCommitWithBrokerFailures(ConsumerTest.scala:182)

Best,
Fangmin

 ConsumerBounceTest.testSeekAndCommitWithBrokerFailures transient failure
 

 Key: KAFKA-2055
 URL: https://issues.apache.org/jira/browse/KAFKA-2055
 Project: Kafka
  Issue Type: Sub-task
Reporter: Guozhang Wang
  Labels: newbie

 {code}
 kafka.api.ConsumerBounceTest > testSeekAndCommitWithBrokerFailures FAILED
 java.lang.AssertionError: expected:<1000> but was:<976>
 at org.junit.Assert.fail(Assert.java:92)
 at org.junit.Assert.failNotEquals(Assert.java:689)
 at org.junit.Assert.assertEquals(Assert.java:127)
 at org.junit.Assert.assertEquals(Assert.java:514)
 at org.junit.Assert.assertEquals(Assert.java:498)
 at 
 kafka.api.ConsumerBounceTest.seekAndCommitWithBrokerFailures(ConsumerBounceTest.scala:117)
 at 
 kafka.api.ConsumerBounceTest.testSeekAndCommitWithBrokerFailures(ConsumerBounceTest.scala:98)
 kafka.api.ConsumerBounceTest > testSeekAndCommitWithBrokerFailures FAILED
 java.lang.AssertionError: expected:<1000> but was:<913>
 at org.junit.Assert.fail(Assert.java:92)
 at org.junit.Assert.failNotEquals(Assert.java:689)
 at org.junit.Assert.assertEquals(Assert.java:127)
 at org.junit.Assert.assertEquals(Assert.java:514)
 at org.junit.Assert.assertEquals(Assert.java:498)
 at 
 kafka.api.ConsumerBounceTest.seekAndCommitWithBrokerFailures(ConsumerBounceTest.scala:117)
 at 
 kafka.api.ConsumerBounceTest.testSeekAndCommitWithBrokerFailures(ConsumerBounceTest.scala:98)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2055) ConsumerBounceTest.testSeekAndCommitWithBrokerFailures transient failure

2015-04-10 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14490762#comment-14490762
 ] 

Sriharsha Chintalapani commented on KAFKA-2055:
---

[~lvfangmin] I can see it under:
find . -name ConsumerBounceTest.scala
./core/src/test/scala/integration/kafka/api/ConsumerBounceTest.scala




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2055) ConsumerBounceTest.testSeekAndCommitWithBrokerFailures transient failure

2015-04-10 Thread Ewen Cheslack-Postava (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14490780#comment-14490780
 ] 

Ewen Cheslack-Postava commented on KAFKA-2055:
--

[~lvfangmin] You may need to pull to get up to date; some tests were moved 
recently in commit 6adaffd8. A couple of tests were moved from 
core/src/test/scala/integration/kafka/api/ConsumerTest.scala to 
core/src/test/scala/integration/kafka/api/ConsumerBounceTest.scala.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2055) ConsumerBounceTest.testSeekAndCommitWithBrokerFailures transient failure

2015-04-10 Thread Fangmin Lv (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14490792#comment-14490792
 ] 

Fangmin Lv commented on KAFKA-2055:
---

[~ewencp] I can see the test case after pulling the latest code base. Thanks 
for your help.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2068) Replace OffsetCommit Request/Response with org.apache.kafka.common.requests equivalent

2015-04-10 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14490790#comment-14490790
 ] 

Guozhang Wang commented on KAFKA-2068:
--

Yeah, I can take this on.

 Replace OffsetCommit Request/Response with  org.apache.kafka.common.requests  
 equivalent
 

 Key: KAFKA-2068
 URL: https://issues.apache.org/jira/browse/KAFKA-2068
 Project: Kafka
  Issue Type: Sub-task
Reporter: Gwen Shapira
 Fix For: 0.8.3


 Replace OffsetCommit Request/Response with  org.apache.kafka.common.requests  
 equivalent



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (KAFKA-2068) Replace OffsetCommit Request/Response with org.apache.kafka.common.requests equivalent

2015-04-10 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang reassigned KAFKA-2068:


Assignee: Guozhang Wang

 Replace OffsetCommit Request/Response with  org.apache.kafka.common.requests  
 equivalent
 

 Key: KAFKA-2068
 URL: https://issues.apache.org/jira/browse/KAFKA-2068
 Project: Kafka
  Issue Type: Sub-task
Reporter: Gwen Shapira
Assignee: Guozhang Wang
 Fix For: 0.8.3


 Replace OffsetCommit Request/Response with  org.apache.kafka.common.requests  
 equivalent



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1669) Default rebalance retries and backoff should be higher

2015-04-10 Thread Clark Haskins (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14490238#comment-14490238
 ] 

Clark Haskins commented on KAFKA-1669:
--

The existing consumer will likely stick around for quite some time. I think we 
should enhance the existing consumer with this.


-Clark

 Default rebalance retries and backoff should be higher
 --

 Key: KAFKA-1669
 URL: https://issues.apache.org/jira/browse/KAFKA-1669
 Project: Kafka
  Issue Type: Improvement
  Components: consumer
Reporter: Clark Haskins
Assignee: Mayuresh Gharat
  Labels: newbie
 Attachments: KAFKA-1669.patch


 The default rebalance logic does not work for consumers with large numbers of 
 partitions and/or topics. 
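
For context, a rough sketch of how these settings might be raised on the old
high-level (ZooKeeper-based) consumer. The property names are the existing
rebalance settings; the concrete values and class/group identifiers below are
purely illustrative, not recommendations:

{code}
import java.util.Properties;

import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.javaapi.consumer.ConsumerConnector;

public class RebalanceTuningSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("zookeeper.connect", "localhost:2181");
        props.put("group.id", "example-group");
        // Raise the rebalance retry count and backoff beyond the shipped defaults,
        // giving consumers with many partitions/topics more time to settle.
        // The values below are illustrative only.
        props.put("rebalance.max.retries", "10");
        props.put("rebalance.backoff.ms", "10000");
        ConsumerConnector connector =
            Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
        // ... create message streams and consume, then connector.shutdown()
    }
}
{code}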



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Please add me to the contributor list

2015-04-10 Thread Vijay Bhat
Hi Kafka-ers,

My name is Vijay Bhat and I'm very interested in contributing to the Apache
Kafka project. A little about myself: My background is in CS and I've been
focusing on Hadoop / Big Data technologies for designing scalable data
infrastructures for the past few years. I also enjoy working on the data
science side of things (formulating, building, testing, deploying ML
models). I've also contributed a few patches to YARN, HDFS and Hadoop
Common.

I've read a lot about Kafka and had the chance to play around with it a
little for one of my projects, and it seems very cool! I'd love to
contribute to Kafka and do my part to help the project. I'd really
appreciate it if I could be added to the contributor list.

Thanks!
Vijay


Re: Please add me to the contributor list

2015-04-10 Thread Neha Narkhede
Hi Vijay,

Thanks for your interest in contributing to Kafka. Here is a link to some
newbie JIRAs that you can start looking into:
https://issues.apache.org/jira/browse/KAFKA-2059?jql=project%20%3D%20KAFKA%20AND%20labels%20in%20(newbie%2C%20%22newbie%2B%2B%22)
You can follow the instructions to contribute a patch here:
http://kafka.apache.org/contributing.html

Thanks,
Neha

On Fri, Apr 10, 2015 at 1:44 PM, Vijay Bhat vijaysb...@gmail.com wrote:

 Hi Kafka-ers,

 My name is Vijay Bhat and I'm very interested in contributing to the Apache
 Kafka project. A little about myself: My background is in CS and I've been
 focusing on Hadoop / Big Data technologies for designing scalable data
 infrastructures for the past few years. I also enjoy working on the data
 science side of things (formulating, building, testing, deploying ML
 models). I've also contributed a few patches to YARN, HDFS and Hadoop
 Common.

 I've read a lot about Kafka and had the chance to play around with it a
 little for one of my projects, and it seems very cool! I'd love to
 contribute to Kafka and do my part to help the project. I'd really
 appreciate it if I could be added to the contributor list.

 Thanks!
 Vijay




-- 
Thanks,
Neha


KIP discussion Apr 15 at 9:30 am PST

2015-04-10 Thread Jun Rao
We plan to have a KIP discussion on Google hangout on Apr. 15 at 9:30am
PST. This is moved to a different time on Wed due to conflicts with
ApacheCon next week. If you are interested in participating and have not
already received a calendar invitation, please let me know. The following
is the agenda.

KIP-4 (admin commands): wrap up any remaining issues
KIP-11 (Authorization): Parth to give a quick overview.
KIP-12 (SSL/Kerberos): See if there is any blocker.

jira backlog assignment

Thanks,

Jun


[jira] [Updated] (KAFKA-1334) Add failure detection capability to the coordinator / consumer

2015-04-10 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-1334:
-
Reviewer: Guozhang Wang

 Add failure detection capability to the coordinator / consumer
 --

 Key: KAFKA-1334
 URL: https://issues.apache.org/jira/browse/KAFKA-1334
 Project: Kafka
  Issue Type: Sub-task
  Components: consumer
Affects Versions: 0.9.0
Reporter: Neha Narkhede
Assignee: Onur Karaman
 Attachments: KAFKA-1334.patch


 1) Add coordinator discovery and failure detection to the consumer.
 2) Add failure detection capability to the coordinator when group management 
 is used.
 This will not include any rebalancing logic, just the logic to detect 
 consumer failures using session.timeout.ms. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1660) Ability to call close() with a timeout on the Java Kafka Producer.

2015-04-10 Thread Jiangjie Qin (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1660?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiangjie Qin updated KAFKA-1660:

Attachment: KAFKA-1660_2015-04-10_15:08:54.patch

 Ability to call close() with a timeout on the Java Kafka Producer. 
 ---

 Key: KAFKA-1660
 URL: https://issues.apache.org/jira/browse/KAFKA-1660
 Project: Kafka
  Issue Type: Improvement
  Components: clients, producer 
Affects Versions: 0.8.2.0
Reporter: Andrew Stein
Assignee: Jiangjie Qin
 Fix For: 0.8.3

 Attachments: KAFKA-1660.patch, KAFKA-1660.patch, 
 KAFKA-1660_2015-02-17_16:41:19.patch, KAFKA-1660_2015-03-02_10:41:49.patch, 
 KAFKA-1660_2015-03-08_21:14:50.patch, KAFKA-1660_2015-03-09_12:56:39.patch, 
 KAFKA-1660_2015-03-25_10:55:42.patch, KAFKA-1660_2015-03-27_16:35:42.patch, 
 KAFKA-1660_2015-04-07_18:18:40.patch, KAFKA-1660_2015-04-08_14:01:12.patch, 
 KAFKA-1660_2015-04-10_15:08:54.patch


 I would like the ability to call {{close}} with a timeout on the Java 
 Client's KafkaProducer.
 h6. Workaround
 Currently, it is possible to ensure that {{close}} will return quickly by 
 first doing a {{future.get(timeout)}} on the last future produced on each 
 partition, but this means that the user has to define the partitions up front 
 at the time of {{send}} and track the returned {{future}}'s
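
For reference, a minimal sketch of the workaround described above, assuming the
caller keeps the last Future returned by send() for each partition it writes to.
The class and method names here are made up for illustration:

{code}
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

public class BoundedCloseWorkaround {
    // Last future returned by send() for each partition this caller wrote to.
    private final Map<Integer, Future<RecordMetadata>> lastFuturePerPartition = new HashMap<>();

    public void send(KafkaProducer<byte[], byte[]> producer, String topic, int partition, byte[] value) {
        ProducerRecord<byte[], byte[]> record = new ProducerRecord<>(topic, partition, null, value);
        lastFuturePerPartition.put(partition, producer.send(record));
    }

    public void closeQuickly(KafkaProducer<byte[], byte[]> producer, long timeoutMs) throws InterruptedException {
        // Bound the wait by waiting on the last future of each partition first,
        // so that close() itself has little or nothing left to flush.
        for (Future<RecordMetadata> f : lastFuturePerPartition.values()) {
            try {
                f.get(timeoutMs, TimeUnit.MILLISECONDS);
            } catch (TimeoutException | ExecutionException e) {
                // Give up on this partition; close() below may still block on it,
                // which is exactly the limitation this JIRA is about.
            }
        }
        producer.close();
    }
}
{code}

The timed close() being added by this JIRA removes the need for such bookkeeping.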



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 31850: Patch for KAFKA-1660

2015-04-10 Thread Jiangjie Qin

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/31850/
---

(Updated April 10, 2015, 10:09 p.m.)


Review request for kafka.


Bugs: KAFKA-1660
https://issues.apache.org/jira/browse/KAFKA-1660


Repository: kafka


Description (updated)
---

A minor fix.


Incorporated Guozhang's comments.


Modify according to the latest conclusion.


Patch for the finally passed KIP-15git status


Addressed Joel and Guozhang's comments.


rebased on trunk


Rebase on trunk


Addressed Joel's comments.


Addressed Joel's comments


Diffs (updated)
-

  clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java 
b91e2c52ed0acb1faa85915097d97bafa28c413a 
  clients/src/main/java/org/apache/kafka/clients/producer/MockProducer.java 
6913090af03a455452b0b5c3df78f266126b3854 
  clients/src/main/java/org/apache/kafka/clients/producer/Producer.java 
5b3e75ed83a940bc922f9eca10d4008d67aa37c9 
  
clients/src/main/java/org/apache/kafka/clients/producer/internals/RecordAccumulator.java
 0e7ab29a07301309aa8ca5b75b32b8e05ddc3a94 
  clients/src/main/java/org/apache/kafka/clients/producer/internals/Sender.java 
70954cadb5c7e9a4c326afcf9d9a07db230e7db2 
  clients/src/main/java/org/apache/kafka/common/errors/InterruptException.java 
fee322fa0dd9704374db4a6964246a7d2918d3e4 
  clients/src/main/java/org/apache/kafka/common/serialization/Serializer.java 
c2fdc23239bd2196cd912c3d121b591f21393eab 
  
clients/src/test/java/org/apache/kafka/clients/producer/internals/RecordAccumulatorTest.java
 05e2929c0a9fc8cf2a8ebe0776ca62d5f3962d5c 
  core/src/test/scala/integration/kafka/api/ProducerSendTest.scala 
9811a2b2b1e9bf1beb301138f7626e12d275a8db 

Diff: https://reviews.apache.org/r/31850/diff/


Testing
---

Unit tests passed.


Thanks,

Jiangjie Qin



[jira] [Commented] (KAFKA-1660) Ability to call close() with a timeout on the Java Kafka Producer.

2015-04-10 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14490445#comment-14490445
 ] 

Jiangjie Qin commented on KAFKA-1660:
-

Updated reviewboard https://reviews.apache.org/r/31850/diff/
 against branch origin/trunk




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 31850: Patch for KAFKA-1660

2015-04-10 Thread Jiangjie Qin


 On April 10, 2015, 4:36 p.m., Joel Koshy wrote:
  clients/src/main/java/org/apache/kafka/clients/producer/internals/RecordAccumulator.java,
   line 394
  https://reviews.apache.org/r/31850/diff/7/?file=921104#file921104line394
 
  I was trying to find a case where it wouldn't work, but I think it 
  works as required.
  
  - Client thread 1 calls close
  - Client thread 2 calls append _before_ the accumulator is closed and 
  reaches at or after line 177
  - Client thread 1 marks the accumulator as closed
  - Sender thread comes to this point and aborts/clears batches.
  - Client thread 2 allocates and returns a new batch (and decrements the 
  appendsInProgress count)
  - Sender thread checks appendInProgress which returns false
  - Which is why we need the additional abortBatches after the loop.
  
  It is tricky though. I'm wondering if the following would work and is 
  simpler/clearer: make the post-condition of close be (i) the accumulator 
  closed flag is true and (ii) there are no pending appends.
  
  IOW in accumulator.close, set the flag to true and then wait until 
  there are no appendsInProgress. Do you think that would work?

Talked with Joel offline; blocking on close has issues if close(0) is called 
from a callback. I added a comment to explain the tricky synchronization.
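
For readers following the thread, a minimal standalone sketch of the
synchronization pattern being discussed (a closed flag plus an in-progress
append counter, with a final abort pass after the flag is set). This is
illustrative pseudologic, not the actual RecordAccumulator code:

{code}
import java.util.concurrent.atomic.AtomicInteger;

public class CloseRaceSketch {
    private volatile boolean closed = false;
    private final AtomicInteger appendsInProgress = new AtomicInteger(0);

    /** Called by client threads. */
    public void append(byte[] record) {
        appendsInProgress.incrementAndGet();
        try {
            if (closed)
                throw new IllegalStateException("Cannot append after the accumulator is closed");
            // ... allocate a batch and enqueue the record ...
        } finally {
            appendsInProgress.decrementAndGet();
        }
    }

    /** Called by the client thread initiating close(). */
    public void close() {
        closed = true;
    }

    /** Called by the sender thread when aborting incomplete batches on close. */
    public void abortIncompleteBatches() {
        // Keep aborting while appends are still in flight. The last in-flight
        // append may enqueue a batch after the abort inside the loop but before
        // its counter decrement is observed, so one more abort pass is needed
        // once no appends remain -- the "additional abortBatches after the loop"
        // mentioned above.
        do {
            abortBatches();
        } while (appendsInProgress.get() > 0);
        abortBatches();
    }

    private void abortBatches() {
        // ... drain and deallocate any queued, not-yet-sent batches ...
    }
}
{code}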


- Jiangjie


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/31850/#review79677
---






Re: Review Request 31850: Patch for KAFKA-1660

2015-04-10 Thread Guozhang Wang

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/31850/#review79761
---

Ship it!


- Guozhang Wang






Re: Please add me to the contributor list

2015-04-10 Thread Vijay Bhat
Thanks Neha.

The instructions on the page mention that I need to be added to the
contributor list before I can assign a JIRA to myself and begin work on it.
Is that not the procedure? Right now I don't see an "Assign to me" link on
the JIRAs (which I do see in other Hadoop projects).

-Vijay

On Fri, Apr 10, 2015 at 1:49 PM, Neha Narkhede n...@confluent.io wrote:

 Hi Vijay,

 Thanks for your interest in contributing to Kafka. Here is a link to some
 newbie
 
 https://issues.apache.org/jira/browse/KAFKA-2059?jql=project%20%3D%20KAFKA%20AND%20labels%20in%20(newbie%2C%20%22newbie%2B%2B%22)
 
 JIRAs that you can start looking into. You can follow instructions to
 contribute a patch here http://kafka.apache.org/contributing.html.

 Thanks,
 Neha

 On Fri, Apr 10, 2015 at 1:44 PM, Vijay Bhat vijaysb...@gmail.com wrote:

  Hi Kafka-ers,
 
  My name is Vijay Bhat and I'm very interested in contributing to the
 Apache
  Kafka project. A little about myself: My background is in CS and I've
 been
  focusing on Hadoop / Big Data technologies for designing scalable data
  infrastructures for the past few years. I also enjoy working on the data
  science side of things (formulating, building, testing, deploying ML
  models). I've also contributed a few patches to YARN, HDFS and Hadoop
  Common.
 
  I've read a lot about Kafka and had the chance to play around with it a
  little for one of my projects, and it seems very cool! I'd love to
  contribute to Kafka and do my part to help the project. I'd really
  appreciate it if I could be added to the contributor list.
 
  Thanks!
  Vijay
 



 --
 Thanks,
 Neha



Re: Please add me to the contributor list

2015-04-10 Thread Jun Rao
Vijay,

Just added you to the contributor list.

Thanks,

Jun




Re: Please add me to the contributor list

2015-04-10 Thread Vijay Bhat
Thanks Jun!




Re: [DISCUSS] KIP-18 - JBOD Support

2015-04-10 Thread Jun Rao
Andrii,

1. I was wondering what happens if the controller fails over after step 4). Since
the ZK node is gone, how does the new controller know which replicas failed due
to disk failures? Otherwise, the controller will assume those replicas are
alive again.

2. Just to clarify: in the proposal, those failed replicas will not be
auto-repaired and the affected partitions will just keep running in
under-replicated mode, right? To repair the failed replicas, the admin still
needs to stop the broker?

Thanks,

Jun



On Fri, Apr 10, 2015 at 10:29 AM, Andrii Biletskyi 
andrii.bilets...@stealth.ly wrote:

 Todd, Jun,

 Thanks for the comments.
 
 I agree we might want to change the fair on-disk partition assignment
 in the scope of these changes. I'm open to suggestions; I didn't bring it
 up here because of the facts that Todd mentioned - there is still no
 clear understanding of who should be responsible for assignment - the
 broker or the controller.
 
 1. Yes, the way the broker initiates a partition restart should be discussed.
 But I don't understand the problem with controller failover. The intended
 workflow is the following (a rough sketch of the broker side of steps 0-1
 appears at the end of this email):
 0) On error the broker removes the partitions from ReplicaManager and LogManager
 1) The broker creates a zk node
 2) The controller picks it up and re-generates leaders and followers for the partitions
 3) The controller sends new LeaderAndIsr and UpdateMetadata requests to the cluster
 4) The controller deletes the zk node
 Now, if the controller fails between 3) and 4), yes, the controller will send LI
 requests twice, but the broker which requested the partition restart will ignore
 the second one, because the partition would have been created at that point -
 while handling the first LI request.

 2. The main benefit, from my perspective: currently any file IO error means the
 broker halts, and you have to remove the disk and restart the broker. With this
 KIP, on an IO error we simply reject that single request (or whatever action
 the IO error occurred during); the broker detects the affected partitions and
 silently restarts them, while normally handling other requests at the same time
 (of course, as long as those are not related to the broken disk).
 
 3. I agree, without tools to perform such operational commands we won't be able
 to fully leverage a JBOD architecture. That's why I think we should design this
 in such a way that implementing those tools is simple. But before that,
 it'd be good to understand whether we are on the right path in general.

 Thanks,
 Andrii Biletskyi

 On Fri, Apr 10, 2015 at 6:56 PM, Jun Rao j...@confluent.io wrote:

  Andrii,
 
  Thanks for writing up the proposal. A few thoughts on this.
 
  1. Your proposal is to have the broker notify the controller about failed
  replicas. We need to think through this a bit more. The controller may
 fail
  later. During the controller failover, it needs to be able to detect
 those
  failed replicas again. Otherwise, it may revert some of the decisions
 that
  it has made earlier. In the current proposal, it seems that the info
 about
  the failed replicas will be lost during controller failover?
 
  2. Overall, it's not very clear to me what benefit this proposal
 provides.
  The proposal seems to detect failed disks and then just marks the
  associated replicas as offline. How do we bring those replicas to online
  again? Do we have to stop the broker and either fix the failed disk or
  remove it from the configured log dir? If so, there will still be a down
  time of the broker. The changes in the proposal are non-trivial. So, we
 need
  to be certain that we get some significant benefits.
 
  3. As Todd pointed out, it will be worth thinking through other issues
  related to JBOD.
 
  Thanks,
 
  Jun
 
  On Thu, Apr 9, 2015 at 5:36 AM, Andrii Biletskyi 
  andrii.bilets...@stealth.ly wrote:
 
   Hi,
  
   Let me start discussion thread for KIP-18 - JBOD Support.
  
   Link to wiki:
  
 https://cwiki.apache.org/confluence/display/KAFKA/KIP-18+-+JBOD+Support
  
  
   Thanks,
   Andrii Biletskyi
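
To make steps 0) and 1) of the restart workflow described earlier in this thread
concrete, here is a rough broker-side sketch. The znode path, payload format,
and helper methods referenced in the comments are hypothetical illustrations,
not part of the KIP:

{code}
import java.util.List;

import org.I0Itec.zkclient.ZkClient;

public class DiskFailureNotifier {
    private final ZkClient zkClient;

    public DiskFailureNotifier(ZkClient zkClient) {
        this.zkClient = zkClient;
    }

    // Steps 0) and 1): drop the affected partitions locally, then leave a
    // notification znode for the controller to pick up and re-elect leaders.
    public void onLogDirFailure(int brokerId, List<String> affectedPartitions) {
        // 0) stop serving the partitions backed by the failed disk, e.g.
        //    replicaManager.removePartitions(affectedPartitions) and
        //    logManager.removeLogs(affectedPartitions)   (hypothetical helpers)
        String path = "/brokers/partition_restart/" + brokerId;   // hypothetical path
        String payload = affectedPartitions.toString();           // hypothetical payload
        if (!zkClient.exists(path))
            zkClient.createPersistent(path, payload);
        // 2)-4) are then driven by the controller, which watches this path.
    }
}
{code}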
  
 



[jira] [Commented] (KAFKA-2068) Replace OffsetCommit Request/Response with org.apache.kafka.common.requests equivalent

2015-04-10 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14490534#comment-14490534
 ] 

Jun Rao commented on KAFKA-2068:


[~guozhang], do you plan to take this on? Thanks,




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1927) Replace requests in kafka.api with requests in org.apache.kafka.common.requests

2015-04-10 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-1927:
---
Status: In Progress  (was: Patch Available)

The actual patch is now distributed across the sub-JIRAs.

 Replace requests in kafka.api with requests in 
 org.apache.kafka.common.requests
 ---

 Key: KAFKA-1927
 URL: https://issues.apache.org/jira/browse/KAFKA-1927
 Project: Kafka
  Issue Type: Improvement
Reporter: Jay Kreps
Assignee: Gwen Shapira
 Fix For: 0.8.3

 Attachments: KAFKA-1927.patch


 The common package introduced a better way of defining requests using a new 
 protocol definition DSL and also includes wrapper objects for these.
 We should switch KafkaApis over to use these request definitions and consider 
 the scala classes deprecated (we probably need to retain some of them for a 
 while for the scala clients).
 This will be a big improvement because
 1. We will have each request now defined in only one place (Protocol.java)
 2. We will have built-in support for multi-version requests
 3. We will have much better error messages (no more cryptic underflow errors)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 33017: Fix timing issue in MetadataTest

2015-04-10 Thread Jiangjie Qin


 On April 10, 2015, 9:52 p.m., Jiangjie Qin wrote:
  Ship It!

BTW, I think KafkaProducer has the same issue there. Could you update the code 
there as well?


- Jiangjie


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/33017/#review79756
---


On April 9, 2015, 3:35 p.m., Rajini Sivaram wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/33017/
 ---
 
 (Updated April 9, 2015, 3:35 p.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-2089
 https://issues.apache.org/jira/browse/KAFKA-2089
 
 
 Repository: kafka
 
 
 Description
 ---
 
 Patch for KAFKA-2089: fix timing issue in MetadataTest
 
 
 Diffs
 -
 
   clients/src/test/java/org/apache/kafka/clients/MetadataTest.java 
 928087d29deb80655ca83726c1ebc45d76468c1f 
 
 Diff: https://reviews.apache.org/r/33017/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Rajini Sivaram
 




Re: Review Request 33049: Patch for KAFKA-2084

2015-04-10 Thread Aditya Auradkar

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/33049/
---

(Updated April 11, 2015, 12:24 a.m.)


Review request for kafka.


Bugs: KAFKA-2084
https://issues.apache.org/jira/browse/KAFKA-2084


Repository: kafka


Description (updated)
---

This is currently not being used anywhere in the code because I haven't yet 
figured out how to enforce delays i.e. purgatory vs delay queue. I'll have a 
better idea once I look at the new purgatory implementation. Hopefully, this 
smaller patch is easier to review.

Added more testcases


Some locking changes for reading/creating the sensors


Diffs (updated)
-

  clients/src/main/java/org/apache/kafka/common/metrics/MetricConfig.java 
dfa1b0a11042ad9d127226f0e0cec8b1d42b8441 
  clients/src/main/java/org/apache/kafka/common/metrics/Metrics.java 
b3d3d7c56acb445be16a3fbe00f05eaba659be46 
  clients/src/main/java/org/apache/kafka/common/metrics/Quota.java 
d82bb0c055e631425bc1ebbc7d387baac76aeeaa 
  
clients/src/main/java/org/apache/kafka/common/metrics/QuotaViolationException.java
 a451e5385c9eca76b38b425e8ac856b2715fcffe 
  clients/src/main/java/org/apache/kafka/common/metrics/Sensor.java 
ca823fd4639523018311b814fde69b6177e73b97 
  core/src/main/scala/kafka/server/ClientQuotaMetrics.scala PRE-CREATION 
  core/src/main/scala/kafka/server/KafkaConfig.scala 
69b772c1941865fbe15b34bb2784c511f8ce519a 
  core/src/main/scala/kafka/server/KafkaServer.scala 
c63f4ba9d622817ea8636d4e6135fba917ce085a 
  core/src/test/scala/unit/kafka/server/ClientQuotaMetricsTest.scala 
PRE-CREATION 

Diff: https://reviews.apache.org/r/33049/diff/


Testing
---


Thanks,

Aditya Auradkar



[jira] [Commented] (KAFKA-2084) byte rate metrics per client ID (producer and consumer)

2015-04-10 Thread Aditya A Auradkar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14490600#comment-14490600
 ] 

Aditya A Auradkar commented on KAFKA-2084:
--

Updated reviewboard https://reviews.apache.org/r/33049/diff/
 against branch trunk

 byte rate metrics per client ID (producer and consumer)
 ---

 Key: KAFKA-2084
 URL: https://issues.apache.org/jira/browse/KAFKA-2084
 Project: Kafka
  Issue Type: Sub-task
Reporter: Aditya Auradkar
Assignee: Aditya Auradkar
 Attachments: KAFKA-2084.patch, KAFKA-2084_2015-04-09_18:10:56.patch, 
 KAFKA-2084_2015-04-10_17:24:34.patch


 We need to be able to track the bytes-in/bytes-out rate on a per-client ID 
 basis. This is necessary for quotas.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 33049: Patch for KAFKA-2084

2015-04-10 Thread Aditya Auradkar

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/33049/
---

(Updated April 11, 2015, 12:25 a.m.)


Review request for kafka.


Bugs: KAFKA-2084
https://issues.apache.org/jira/browse/KAFKA-2084


Repository: kafka


Description (updated)
---

WIP: First patch for quotas. The changes are:
1. Adding per-client throttle time and quota metrics in
ClientQuotaMetrics.scala.
2. Making changes in QuotaViolationException and Sensor to return the delay
time.
3. Adding the configuration needed so far for quotas in KafkaConfig.
4. Unit tests.

This is currently not being used anywhere in the code because I haven't yet
figured out how to enforce delays, i.e. purgatory vs. delay queue. I'll have a
better idea once I look at the new purgatory implementation. Hopefully, this
smaller patch is easier to review.

Added more testcases


Some locking changes for reading/creating the sensors
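
Relating to the description above, a rough sketch of what a per-client
byte-rate sensor with an upper-bound quota could look like on top of the
o.a.k.common.metrics classes this patch touches. The class, sensor and metric
names are made up, the signatures follow the general shape of the metrics API
rather than this exact patch, and the delay-time computation the patch adds is
only hinted at in a comment:

{code}
import java.util.Collections;

import org.apache.kafka.common.MetricName;
import org.apache.kafka.common.metrics.MetricConfig;
import org.apache.kafka.common.metrics.Metrics;
import org.apache.kafka.common.metrics.Quota;
import org.apache.kafka.common.metrics.QuotaViolationException;
import org.apache.kafka.common.metrics.Sensor;
import org.apache.kafka.common.metrics.stats.Rate;

public class ClientQuotaSketch {
    private final Sensor byteRateSensor;

    public ClientQuotaSketch(Metrics metrics, String clientId, double quotaBytesPerSec) {
        // One sensor per client id, with an upper-bound quota on its byte rate.
        byteRateSensor = metrics.sensor(
            "byte-rate-" + clientId,
            new MetricConfig().quota(Quota.upperBound(quotaBytesPerSec)));
        byteRateSensor.add(
            new MetricName("byte-rate", "client-quota-sketch",
                "Byte rate of client " + clientId, Collections.<String, String>emptyMap()),
            new Rate());
    }

    /** Record bytes for this client and report whether it is now over its quota. */
    public boolean recordAndCheck(long bytes, long nowMs) {
        try {
            byteRateSensor.record(bytes, nowMs);
            return false; // within quota
        } catch (QuotaViolationException e) {
            // The patch under review extends this path to also return a throttle/delay time.
            return true;
        }
    }
}
{code}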


Diffs
-

  clients/src/main/java/org/apache/kafka/common/metrics/MetricConfig.java 
dfa1b0a11042ad9d127226f0e0cec8b1d42b8441 
  clients/src/main/java/org/apache/kafka/common/metrics/Metrics.java 
b3d3d7c56acb445be16a3fbe00f05eaba659be46 
  clients/src/main/java/org/apache/kafka/common/metrics/Quota.java 
d82bb0c055e631425bc1ebbc7d387baac76aeeaa 
  
clients/src/main/java/org/apache/kafka/common/metrics/QuotaViolationException.java
 a451e5385c9eca76b38b425e8ac856b2715fcffe 
  clients/src/main/java/org/apache/kafka/common/metrics/Sensor.java 
ca823fd4639523018311b814fde69b6177e73b97 
  core/src/main/scala/kafka/server/ClientQuotaMetrics.scala PRE-CREATION 
  core/src/main/scala/kafka/server/KafkaConfig.scala 
69b772c1941865fbe15b34bb2784c511f8ce519a 
  core/src/main/scala/kafka/server/KafkaServer.scala 
c63f4ba9d622817ea8636d4e6135fba917ce085a 
  core/src/test/scala/unit/kafka/server/ClientQuotaMetricsTest.scala 
PRE-CREATION 

Diff: https://reviews.apache.org/r/33049/diff/


Testing
---


Thanks,

Aditya Auradkar



[jira] [Updated] (KAFKA-2084) byte rate metrics per client ID (producer and consumer)

2015-04-10 Thread Aditya A Auradkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aditya A Auradkar updated KAFKA-2084:
-
Attachment: KAFKA-2084_2015-04-10_17:24:34.patch




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2102) Remove unnecessary synchronization when managing metadata

2015-04-10 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14490624#comment-14490624
 ] 

Jiangjie Qin commented on KAFKA-2102:
-

I'm with Ewen on this. We probably should take a further look at the way we do the 
patch. Finer locking makes sense only if it provides a good performance 
improvement. Here are some thoughts:
1. For infrequent method calls, we don't need fine-grained locks.
2. For frequent method calls, we can check whether they really need to lock against 
each other.
For metadata, most of the methods are infrequently called, so synchronized 
methods are probably fine. There are two exceptions:
1. timeToNextUpdate() - only called by the sender thread.
2. containsTopic() - called by the caller thread.
Those two methods do not conflict with each other at all, and 1) is only 
called by the sender thread. So maybe we can just try replacing topics with a 
concurrent hash set and removing the synchronization on containsTopic().
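
As an illustration of that suggestion (not the actual Metadata code), a minimal
sketch where the topic set is a concurrent collection so containsTopic() needs
no lock, while infrequently-called methods stay coarsely synchronized:

{code}
import java.util.Collections;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class MetadataSketch {
    // Concurrent set so membership checks need no lock.
    private final Set<String> topics =
        Collections.newSetFromMap(new ConcurrentHashMap<String, Boolean>());

    private long lastRefreshMs = 0;
    private long metadataExpireMs = 5 * 60 * 1000L;

    // Hot path on the caller thread: no synchronization needed any more.
    public boolean containsTopic(String topic) {
        return topics.contains(topic);
    }

    // Hot path on the sender thread: does not touch 'topics', so it never
    // contends with containsTopic(); keeping it synchronized is cheap.
    public synchronized long timeToNextUpdate(long nowMs) {
        return Math.max(lastRefreshMs + metadataExpireMs - nowMs, 0);
    }

    // Infrequently called methods can stay coarsely synchronized.
    public synchronized void addTopic(String topic) {
        topics.add(topic);
    }

    public synchronized void update(long nowMs) {
        lastRefreshMs = nowMs;
    }
}
{code}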

 Remove unnecessary synchronization when managing metadata
 -

 Key: KAFKA-2102
 URL: https://issues.apache.org/jira/browse/KAFKA-2102
 Project: Kafka
  Issue Type: Improvement
Reporter: Tim Brooks
Assignee: Tim Brooks
 Attachments: KAFKA-2102.patch, KAFKA-2102_2015-04-08_00:20:33.patch


 Usage of the org.apache.kafka.clients.Metadata class is synchronized. It 
 seems like the current functionality could be maintained without 
 synchronizing the whole class.
 I have been working on improving this by moving to finer grained locks and 
 using atomic operations. My initial benchmarking of the producer is that this 
 will improve latency (using HDRHistogram) on submitting messages.
 I have produced an initial patch. I do not necessarily believe this is 
 complete. And I want to definitely produce some more benchmarks. However, I 
 wanted to get early feedback because this change could be deceptively tricky.
 I am interested in knowing if this is:
 1. Something that is of interest to the maintainers/community.
 2. Along the right track
 3. If there are any gotchas that make my current approach naive.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 31606: Patch for KAFKA-1416

2015-04-10 Thread Flutra Osmani

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/31606/
---

(Updated April 11, 2015, 1:36 a.m.)


Review request for kafka.


Bugs: KAFKA-1416
https://issues.apache.org/jira/browse/KAFKA-1416


Repository: kafka


Description (updated)
---

Unified get and send messages in TestUtils.scala and its users

changing sendMessages() signature from KafkaConfig to KafkaServer


 KAFKA-1416 Unify sendMessages/getMessages in unit tests

 Unified get and send messages in TestUtils.scala and its users


Diffs (updated)
-

  core/src/test/scala/unit/kafka/consumer/ZookeeperConsumerConnectorTest.scala 
f3e76db5dcaaae9d301eb47bface83cc62ed87d6 
  core/src/test/scala/unit/kafka/integration/FetcherTest.scala 
0dc837a402953c9c22578599a20db4cf271524cc 
  core/src/test/scala/unit/kafka/integration/UncleanLeaderElectionTest.scala 
a1300894258c0ee77dffc96df24a2f7369eb68da 
  
core/src/test/scala/unit/kafka/javaapi/consumer/ZookeeperConsumerConnectorTest.scala
 ad66bb208b6d054784a5c81f82177b35036c3c14 
  core/src/test/scala/unit/kafka/metrics/MetricsTest.scala 
247a6e947670090a4413af1357897ac440072db4 
  core/src/test/scala/unit/kafka/utils/TestUtils.scala 
5a9e84d44f6567c3a01a4e068c751edb07ee9634 

Diff: https://reviews.apache.org/r/31606/diff/


Testing
---


Thanks,

Flutra Osmani



[jira] [Commented] (KAFKA-1416) Unify sendMessages/getMessages in unit tests

2015-04-10 Thread Flutra Osmani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14490697#comment-14490697
 ] 

Flutra Osmani commented on KAFKA-1416:
--

Updated reviewboard https://reviews.apache.org/r/31606/diff/
 against branch origin/trunk

 Unify sendMessages/getMessages in unit tests
 

 Key: KAFKA-1416
 URL: https://issues.apache.org/jira/browse/KAFKA-1416
 Project: Kafka
  Issue Type: Bug
Reporter: Guozhang Wang
Assignee: Flutra Osmani
  Labels: newbie
 Attachments: KAFKA-1416.patch, KAFKA-1416_2015-03-01_17:24:55.patch, 
 KAFKA-1416_2015-03-26_00:20:36.patch, KAFKA-1416_2015-04-10_18:36:10.patch


 Multiple unit tests have its own internal function to send/get messages from 
 the brokers. For example:
 sendMessages in ZookeeperConsumerConnectorTest
 produceMessage in UncleanLeaderElectionTest
 sendMessages in FetcherTest
 etc
 It is better to unify them in TestUtils.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1416) Unify sendMessages/getMessages in unit tests

2015-04-10 Thread Flutra Osmani (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flutra Osmani updated KAFKA-1416:
-
Status: Patch Available  (was: In Progress)




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1416) Unify sendMessages/getMessages in unit tests

2015-04-10 Thread Flutra Osmani (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flutra Osmani updated KAFKA-1416:
-
Attachment: KAFKA-1416_2015-04-10_18:36:10.patch




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 31606: Patch for KAFKA-1416

2015-04-10 Thread Flutra Osmani


 On April 8, 2015, 9:56 p.m., Guozhang Wang wrote:
  core/src/test/scala/unit/kafka/utils/TestUtils.scala, line 794
  https://reviews.apache.org/r/31606/diff/2-3/?file=881937#file881937line794
 
  Default Partition: use the topic string as the key to determine the 
  partition

Yes, this is already implemented this way. If no partition is specified, then 
the topic string is used as the key to determine partition.


 On April 8, 2015, 9:56 p.m., Guozhang Wang wrote:
  core/src/test/scala/unit/kafka/server/LogRecoveryTest.scala, lines 210-212
  https://reviews.apache.org/r/31606/diff/3/?file=906529#file906529line210
 
  How about just calling TestUtils.sendMessages directly?

I reverted my changes here, since Ewen introduced a new updateProducer() call 
in LogRecoveryTest every time some messages are sent.


- Flutra


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/31606/#review79438
---






Re: Review Request 31606: Patch for KAFKA-1416

2015-04-10 Thread Flutra Osmani


 On March 25, 2015, 9:51 p.m., Guozhang Wang wrote:
  core/src/test/scala/unit/kafka/integration/FetcherTest.scala, line 85
  https://reviews.apache.org/r/31606/diff/2/?file=881933#file881933line85
 
  Import TestUtils.sendMessages

It seems to be the practice among committers throughout the code to call 
TestUtils.sendMessages() fully qualified instead of importing it first. I tried 
the import, but it is not recognized as a method from TestUtils.


 On March 25, 2015, 9:51 p.m., Guozhang Wang wrote:
  core/src/test/scala/unit/kafka/utils/TestUtils.scala, lines 761-773
  https://reviews.apache.org/r/31606/diff/2/?file=881937#file881937line761
 
  Compression code is no longer used anymore, which seems not correct?
 
 Flutra Osmani wrote:
 The compression.codec is now set on sendMessages()
 
 Guozhang Wang wrote:
 Isn't this function sendMessages()? Now when we call createProducer() we 
 will not pass in the props anymore and hence it will always use default 
 values, is that OK?

We are still passing the producer properties in the createProducer() call. Not 
sure why this code diff snippet omits it. The latest commit contains props in 
the createProducer() call.


- Flutra


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/31606/#review77804
---

