Review Request 31816: Fix for KAFKA-527

2015-03-06 Thread Guozhang Wang

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/31816/
---

Review request for kafka.


Bugs: KAFKA-527
https://issues.apache.org/jira/browse/KAFKA-527


Repository: kafka


Description
---

Avoid double copying on decompress


Diffs
-

  core/src/main/scala/kafka/consumer/ConsumerIterator.scala 
ac491b4da2583ef7227c67f5b8bc0fd731d705c3 
  core/src/main/scala/kafka/message/ByteBufferMessageSet.scala 
788c7864bc881b935975ab4a4e877b690e65f1f1 
  core/src/test/scala/unit/kafka/message/MessageCompressionTest.scala 
6f0addcea64f1e78a4de50ec8135f4d02cebd305 
  core/src/test/scala/unit/kafka/producer/SyncProducerTest.scala 
24deea06753e5358aa341c589ca7a7704317e29c 

Diff: https://reviews.apache.org/r/31816/diff/


Testing
---

Unit tests


Thanks,

Guozhang Wang



[jira] [Commented] (KAFKA-1660) Ability to call close() with a timeout on the Java Kafka Producer.

2015-03-06 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14351186#comment-14351186
 ] 

Jiangjie Qin commented on KAFKA-1660:
-

Thanks for the quick response [~parth.brahmbhatt]. I'll take this over, as 
mirror maker needs this close call to avoid data loss.

 Ability to call close() with a timeout on the Java Kafka Producer. 
 ---

 Key: KAFKA-1660
 URL: https://issues.apache.org/jira/browse/KAFKA-1660
 Project: Kafka
  Issue Type: Improvement
  Components: clients, producer 
Affects Versions: 0.8.2.0
Reporter: Andrew Stein
Assignee: Parth Brahmbhatt
 Fix For: 0.8.3

 Attachments: KAFKA-1660.patch, KAFKA-1660_2015-02-17_16:41:19.patch, 
 KAFKA-1660_2015-03-02_10:41:49.patch


 I would like the ability to call {{close}} with a timeout on the Java 
 Client's KafkaProducer.
 h6. Workaround
 Currently, it is possible to ensure that {{close}} will return quickly by 
 first doing a {{future.get(timeout)}} on the last future produced on each 
 partition, but this means that the user has to define the partitions up front 
 at the time of {{send}} and track the returned {{future}}'s



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 31650: Drag Coordinator and FetchManager out of KafkaConsumer, fix a bunch of consumer test issues

2015-03-06 Thread Guozhang Wang


 On March 5, 2015, 11:39 p.m., Onur Karaman wrote:
  clients/src/test/java/org/apache/kafka/clients/consumer/internals/CoordinatorTest.java,
   line 83
  https://reviews.apache.org/r/31650/diff/2/?file=886350#file886350line83
 
  I think these scenarios should be split up into separate tests.
 
 Guozhang Wang wrote:
 The general rule for defining unit test cases is by functionality rather 
 than by scenario, so I think it is OK to group them in one test.
 
 Onur Karaman wrote:
 Grouping them means that if one of the earlier scenarios fails, the 
 later scenarios will not be tested. So we don't know whether only that one 
 scenario failed or whether multiple later scenarios would also have failed.

Hmm.. I think it depends on whether we want to troubleshoot failed tests 
one at a time, or just check whether any scenario in the test case fails, fix 
that one, and retry. For this case I agree that it is better to get 
knowledge of all failed cases. I will split them and check in.


- Guozhang


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/31650/#review75406
---


On March 5, 2015, 10:57 p.m., Guozhang Wang wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/31650/
 ---
 
 (Updated March 5, 2015, 10:57 p.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-1910
 https://issues.apache.org/jira/browse/KAFKA-1910
 
 
 Repository: kafka
 
 
 Description
 ---
 
 See comments in KAFKA-1910;
 
 Updated RB includes unit test for Coordinator / FetchManager / Heartbeat and 
 a couple changes on MemoryRecords and test utils.
 
 
 Diffs
 -
 
   clients/src/main/java/org/apache/kafka/clients/CommonClientConfigs.java 
 06fcfe62cc1fe76f58540221698ef076fe150e96 
   clients/src/main/java/org/apache/kafka/clients/KafkaClient.java 
 8a3e55aaff7d8c26e56a8407166a4176c1da2644 
   clients/src/main/java/org/apache/kafka/clients/NetworkClient.java 
 a7fa4a9dfbcfbc4d9e9259630253cbcded158064 
   clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerConfig.java 
 5fb21001abd77cac839bd724afa04e377a3e82aa 
   clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java 
 67ceb754a52c07143c69b053fe128b9e24060b99 
   
 clients/src/main/java/org/apache/kafka/clients/consumer/internals/Coordinator.java
  PRE-CREATION 
   
 clients/src/main/java/org/apache/kafka/clients/consumer/internals/FetchManager.java
  PRE-CREATION 
   
 clients/src/main/java/org/apache/kafka/clients/consumer/internals/Heartbeat.java
  ee0751e4949120d114202c2299d49612a89b9d97 
   
 clients/src/main/java/org/apache/kafka/clients/consumer/internals/SubscriptionState.java
  d41d3068c11d4b5c640467dc0ae1b7c20a8d128c 
   clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java 
 7397e565fd865214529ffccadd4222d835ac8110 
   clients/src/main/java/org/apache/kafka/clients/producer/ProducerConfig.java 
 122375c473bf73caf05299b9f5174c6b226ca863 
   
 clients/src/main/java/org/apache/kafka/clients/producer/internals/Sender.java 
 ed9c63a6679e3aaf83d19fde19268553a4c107c2 
   clients/src/main/java/org/apache/kafka/common/network/Selector.java 
 6baad9366a1975dbaba1786da91efeaa38533319 
   clients/src/main/java/org/apache/kafka/common/protocol/Errors.java 
 ad2171f5417c93194f5f234bdc7fdd0b8d59a8a8 
   clients/src/main/java/org/apache/kafka/common/record/MemoryRecords.java 
 083e7a39249ab56a73a014b106876244d619f189 
   clients/src/main/java/org/apache/kafka/common/requests/FetchResponse.java 
 e67c4c8332cb1dd3d9cde5de687df7760045dfe6 
   
 clients/src/main/java/org/apache/kafka/common/requests/HeartbeatResponse.java 
 0057496228feeeccbc0c009a42f5268fa2cb8611 
   
 clients/src/main/java/org/apache/kafka/common/requests/JoinGroupRequest.java 
 8c50e9be534c61ecf56106bf2b68cf678ea50d66 
   
 clients/src/main/java/org/apache/kafka/common/requests/JoinGroupResponse.java 
 52b1803d8b558c1eeb978ba8821496c7d3c20a6b 
   
 clients/src/main/java/org/apache/kafka/common/requests/ListOffsetResponse.java
  cfac47a4a05dc8a535595542d93e55237b7d1e93 
   
 clients/src/main/java/org/apache/kafka/common/requests/MetadataResponse.java 
 90f31413d7d80a06c0af359009cc271aa0c67be3 
   
 clients/src/main/java/org/apache/kafka/common/requests/OffsetCommitResponse.java
  4d3b9ececee4b4c0b50ba99da2ddbbb15f9cc08d 
   
 clients/src/main/java/org/apache/kafka/common/requests/OffsetFetchResponse.java
  edbed5880dc44fc178737a5e298c106a00f38443 
   clients/src/main/java/org/apache/kafka/common/requests/ProduceResponse.java 
 a00dcdf15d1c7bac7228be140647bd7d849deb9b 
   clients/src/test/java/org/apache/kafka/clients/MockClient.java 
 8f1a7a625e4eeafa44bbf9e5cff987de86c949be 
   
 

[jira] [Commented] (KAFKA-1660) Ability to call close() with a timeout on the Java Kafka Producer.

2015-03-06 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14351159#comment-14351159
 ] 

Jiangjie Qin commented on KAFKA-1660:
-

[~parth.brahmbhatt] Have you got a chance to work on a KIP for this?

 Ability to call close() with a timeout on the Java Kafka Producer. 
 ---

 Key: KAFKA-1660
 URL: https://issues.apache.org/jira/browse/KAFKA-1660
 Project: Kafka
  Issue Type: Improvement
  Components: clients, producer 
Affects Versions: 0.8.2.0
Reporter: Andrew Stein
Assignee: Parth Brahmbhatt
 Fix For: 0.8.3

 Attachments: KAFKA-1660.patch, KAFKA-1660_2015-02-17_16:41:19.patch, 
 KAFKA-1660_2015-03-02_10:41:49.patch


 I would like the ability to call {{close}} with a timeout on the Java 
 Client's KafkaProducer.
 h6. Workaround
 Currently, it is possible to ensure that {{close}} will return quickly by 
 first doing a {{future.get(timeout)}} on the last future produced on each 
 partition, but this means that the user has to define the partitions up front 
 at the time of {{send}} and track the returned {{future}}'s





[jira] [Commented] (KAFKA-2003) Add upgrade tests

2015-03-06 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14351184#comment-14351184
 ] 

Guozhang Wang commented on KAFKA-2003:
--

Are these three tickets concerning the same testing scenario? KAFKA-1888, 
KAFKA-1898, KAFKA-2003.

 Add upgrade tests
 -

 Key: KAFKA-2003
 URL: https://issues.apache.org/jira/browse/KAFKA-2003
 Project: Kafka
  Issue Type: Improvement
Reporter: Gwen Shapira
Assignee: Ashish K Singh

 To test protocol changes, compatibility and upgrade process, we need a good 
 way to test different versions of the product together and to test end-to-end 
 upgrade process.
 For example, for 0.8.2 to 0.8.3 test we want to check:
 * Can we start a cluster with a mix of 0.8.2 and 0.8.3 brokers?
 * Can a cluster of 0.8.3 brokers bump the protocol level one broker at a time?
 * Can 0.8.2 clients run against a cluster of 0.8.3 brokers?
 There are probably more questions. But an automated framework that can test 
 those and report results will be a good start.





[jira] [Commented] (KAFKA-1660) Ability to call close() with a timeout on the Java Kafka Producer.

2015-03-06 Thread Parth Brahmbhatt (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14351179#comment-14351179
 ] 

Parth Brahmbhatt commented on KAFKA-1660:
-

No, and I won't be able to do it for probably the next couple of weeks. If this 
is a pressing issue and someone else wants to take over the JIRA, please feel 
free to do so.

 Ability to call close() with a timeout on the Java Kafka Producer. 
 ---

 Key: KAFKA-1660
 URL: https://issues.apache.org/jira/browse/KAFKA-1660
 Project: Kafka
  Issue Type: Improvement
  Components: clients, producer 
Affects Versions: 0.8.2.0
Reporter: Andrew Stein
Assignee: Parth Brahmbhatt
 Fix For: 0.8.3

 Attachments: KAFKA-1660.patch, KAFKA-1660_2015-02-17_16:41:19.patch, 
 KAFKA-1660_2015-03-02_10:41:49.patch


 I would like the ability to call {{close}} with a timeout on the Java 
 Client's KafkaProducer.
 h6. Workaround
 Currently, it is possible to ensure that {{close}} will return quickly by 
 first doing a {{future.get(timeout)}} on the last future produced on each 
 partition, but this means that the user has to define the partitions up front 
 at the time of {{send}} and track the returned {{future}}'s





[jira] [Commented] (KAFKA-527) Compression support does numerous byte copies

2015-03-06 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14351177#comment-14351177
 ] 

Guozhang Wang commented on KAFKA-527:
-

Thanks for the patch, this is very promising.

There are a couple of issues we want to resolve here:

1. ByteArrayOutputStream copies data upon overflowing and resizing.

2. The compressed stream needs one extra copy upon finishing reading / writing.

This patch is mainly aimed at #1 above, and I have uploaded a patch for 
optimizing the decompressed iterator as an example of resolving #2. In 
addition, I think in the end we will deprecate ByteBufferMessageSet and move to 
o.a.k.c.r.MemoryRecords, which will resolve both points above. We can discuss 
whether we want to incorporate these patches into ByteBufferMessageSet now or 
just wait for the migration and improve o.a.k.c.r.MemoryRecords. 

For example, today MemoryRecords's write pattern only appends messages into a 
record batch of pre-defined size and closes the batch as that size is 
approached; in ByteBufferMessageSet.create() we are given a set of messages 
without a predicted batch size, but it is still possible to derive one from 
the estimated compression ratio, as we do in Compressor, such that in the worst 
case only one or two buffer expansions (i.e. data copies) are needed. This is 
just an alternative to the linked-list buffers proposed in this patch.
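The pre-sizing idea above can be sketched as follows (the helper and the ratio variable are hypothetical; the actual running-ratio bookkeeping lives in the Compressor class mentioned above):

```java
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;

public class PresizeSketch {
    // Hypothetical running estimate of the compression ratio, updated
    // from past batches the way Compressor does.
    static double estimatedCompressionRatio = 0.5;

    // Pre-size the output buffer from the estimate plus some headroom, so
    // that in the worst case only one or two internal resizes (each a full
    // data copy) are needed instead of many doublings from the default size.
    static ByteArrayOutputStream newCompressionBuffer(int uncompressedBytes) {
        int estimate = (int) (uncompressedBytes * estimatedCompressionRatio * 1.1) + 64;
        return new ByteArrayOutputStream(estimate);
    }

    public static void main(String[] args) {
        byte[] payload = "some message payload ".repeat(100)
                .getBytes(StandardCharsets.UTF_8);
        ByteArrayOutputStream out = newCompressionBuffer(payload.length);
        out.writeBytes(payload);
        System.out.println(out.size() == payload.length);  // prints "true"
    }
}
```

The design point is that a good initial capacity turns O(log n) resize copies into at most one or two, without needing the exact compressed size up front.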

 Compression support does numerous byte copies
 -

 Key: KAFKA-527
 URL: https://issues.apache.org/jira/browse/KAFKA-527
 Project: Kafka
  Issue Type: Bug
  Components: compression
Reporter: Jay Kreps
Assignee: Yasuhiro Matsuda
Priority: Critical
 Attachments: KAFKA-527.message-copy.history, KAFKA-527.patch, 
 java.hprof.no-compression.txt, java.hprof.snappy.text


 The data path for compressing or decompressing messages is extremely 
 inefficient. We do something like 7 (?) complete copies of the data, often 
 for simple things like adding a 4 byte size to the front. I am not sure how 
 this went by unnoticed.
 This is likely the root cause of the performance issues we saw in doing bulk 
 recompression of data in mirror maker.
 The mismatch between the InputStream and OutputStream interfaces and the 
 Message/MessageSet interfaces which are based on byte buffers is the cause of 
 many of these.





[jira] [Commented] (KAFKA-527) Compression support does numerous byte copies

2015-03-06 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14351180#comment-14351180
 ] 

Guozhang Wang commented on KAFKA-527:
-

Created reviewboard https://reviews.apache.org/r/31816/diff/
against branch origin/trunk

 Compression support does numerous byte copies
 -

 Key: KAFKA-527
 URL: https://issues.apache.org/jira/browse/KAFKA-527
 Project: Kafka
  Issue Type: Bug
  Components: compression
Reporter: Jay Kreps
Assignee: Yasuhiro Matsuda
Priority: Critical
 Attachments: KAFKA-527.message-copy.history, KAFKA-527.patch, 
 java.hprof.no-compression.txt, java.hprof.snappy.text


 The data path for compressing or decompressing messages is extremely 
 inefficient. We do something like 7 (?) complete copies of the data, often 
 for simple things like adding a 4 byte size to the front. I am not sure how 
 this went by unnoticed.
 This is likely the root cause of the performance issues we saw in doing bulk 
 recompression of data in mirror maker.
 The mismatch between the InputStream and OutputStream interfaces and the 
 Message/MessageSet interfaces which are based on byte buffers is the cause of 
 many of these.





[jira] [Commented] (KAFKA-2003) Add upgrade tests

2015-03-06 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14351199#comment-14351199
 ] 

Gwen Shapira commented on KAFKA-2003:
-

I think this one is a dupe of KAFKA-1888, but KAFKA-1898 is different. 
(I.e., full wire and API compatibility is not the same as an upgrade.)

I'll let [~singhashish] and [~anigam] figure out who's doing what and which 
JIRA we are keeping :)
IMO, there's more than enough work for two people here.

 Add upgrade tests
 -

 Key: KAFKA-2003
 URL: https://issues.apache.org/jira/browse/KAFKA-2003
 Project: Kafka
  Issue Type: Improvement
Reporter: Gwen Shapira
Assignee: Ashish K Singh

 To test protocol changes, compatibility and upgrade process, we need a good 
 way to test different versions of the product together and to test end-to-end 
 upgrade process.
 For example, for 0.8.2 to 0.8.3 test we want to check:
 * Can we start a cluster with a mix of 0.8.2 and 0.8.3 brokers?
 * Can a cluster of 0.8.3 brokers bump the protocol level one broker at a time?
 * Can 0.8.2 clients run against a cluster of 0.8.3 brokers?
 There are probably more questions. But an automated framework that can test 
 those and report results will be a good start.





[jira] [Assigned] (KAFKA-1660) Ability to call close() with a timeout on the Java Kafka Producer.

2015-03-06 Thread Jiangjie Qin (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1660?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiangjie Qin reassigned KAFKA-1660:
---

Assignee: Jiangjie Qin  (was: Parth Brahmbhatt)

 Ability to call close() with a timeout on the Java Kafka Producer. 
 ---

 Key: KAFKA-1660
 URL: https://issues.apache.org/jira/browse/KAFKA-1660
 Project: Kafka
  Issue Type: Improvement
  Components: clients, producer 
Affects Versions: 0.8.2.0
Reporter: Andrew Stein
Assignee: Jiangjie Qin
 Fix For: 0.8.3

 Attachments: KAFKA-1660.patch, KAFKA-1660_2015-02-17_16:41:19.patch, 
 KAFKA-1660_2015-03-02_10:41:49.patch


 I would like the ability to call {{close}} with a timeout on the Java 
 Client's KafkaProducer.
 h6. Workaround
 Currently, it is possible to ensure that {{close}} will return quickly by 
 first doing a {{future.get(timeout)}} on the last future produced on each 
 partition, but this means that the user has to define the partitions up front 
 at the time of {{send}} and track the returned {{future}}'s





[jira] [Updated] (KAFKA-2007) update offsetrequest for more useful information we have on broker about partition

2015-03-06 Thread Joe Stein (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Stein updated KAFKA-2007:
-
Description: 
this will need a KIP

via [~jkreps] in KIP-6 discussion about KAFKA-1694

The other information that would be really useful to get would be
information about partitions--how much data is in the partition, what are
the segment offsets, what is the log-end offset (i.e. last offset), what is
the compaction point, etc. I think that done right this would be the
successor to the very awkward OffsetRequest we have today.

This is not really blocking that ticket and could happen before or after it. It 
has a lot of other useful purposes and is important to get done, so it is 
tracked here in this JIRA.

  was:
this will need a KIP

via [~jkreps] in KIP-6 discussion about KAFKA-

The other information that would be really useful to get would be
information about partitions--how much data is in the partition, what are
the segment offsets, what is the log-end offset (i.e. last offset), what is
the compaction point, etc. I think that done right this would be the
successor to the very awkward OffsetRequest we have today.

This is not really blocking that ticket and could happen before/after and has a 
lot of other useful purposes and is important to get done so tracking it here 
in this JIRA.


 update offsetrequest for more useful information we have on broker about 
 partition
 --

 Key: KAFKA-2007
 URL: https://issues.apache.org/jira/browse/KAFKA-2007
 Project: Kafka
  Issue Type: Bug
Reporter: Joe Stein
 Fix For: 0.8.3


 this will need a KIP
 via [~jkreps] in KIP-6 discussion about KAFKA-1694
 The other information that would be really useful to get would be
 information about partitions--how much data is in the partition, what are
 the segment offsets, what is the log-end offset (i.e. last offset), what is
 the compaction point, etc. I think that done right this would be the
 successor to the very awkward OffsetRequest we have today.
 This is not really blocking that ticket and could happen before/after and has 
 a lot of other useful purposes and is important to get done so tracking it 
 here in this JIRA.





[jira] [Created] (KAFKA-2006) switch the broker server over to the new java protocol definitions

2015-03-06 Thread Joe Stein (JIRA)
Joe Stein created KAFKA-2006:


 Summary: switch the broker server over to the new java protocol 
definitions
 Key: KAFKA-2006
 URL: https://issues.apache.org/jira/browse/KAFKA-2006
 Project: Kafka
  Issue Type: Bug
Reporter: Joe Stein
Assignee: Andrii Biletskyi
Priority: Blocker
 Fix For: 0.8.3








[jira] [Created] (KAFKA-2007) update offsetrequest for more useful information we have on broker about partition

2015-03-06 Thread Joe Stein (JIRA)
Joe Stein created KAFKA-2007:


 Summary: update offsetrequest for more useful information we have 
on broker about partition
 Key: KAFKA-2007
 URL: https://issues.apache.org/jira/browse/KAFKA-2007
 Project: Kafka
  Issue Type: Bug
Reporter: Joe Stein
 Fix For: 0.8.3


this will need a KIP

via [~jkreps] in KIP-6 discussion about KAFKA-

The other information that would be really useful to get would be
information about partitions--how much data is in the partition, what are
the segment offsets, what is the log-end offset (i.e. last offset), what is
the compaction point, etc. I think that done right this would be the
successor to the very awkward OffsetRequest we have today.

This is not really blocking that ticket and could happen before or after it. It 
has a lot of other useful purposes and is important to get done, so it is 
tracked here in this JIRA.





[jira] [Comment Edited] (KAFKA-1646) Improve consumer read performance for Windows

2015-03-06 Thread Honghai Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14350126#comment-14350126
 ] 

Honghai Chen edited comment on KAFKA-1646 at 3/6/15 9:01 AM:
-

Updated reviewboard against branch origin/0.8.1.
Hi [~jkreps] [~junrao] [~jghoman], please check the review at
https://reviews.apache.org/r/29091/diff/7/, appreciated.


was (Author: waldenchen):
Updated reviewboard  against branch origin/0.8.1
Please check the review at https://reviews.apache.org/r/29091/diff/7/
[~jkreps]

 Improve consumer read performance for Windows
 -

 Key: KAFKA-1646
 URL: https://issues.apache.org/jira/browse/KAFKA-1646
 Project: Kafka
  Issue Type: Improvement
  Components: log
Affects Versions: 0.8.1.1
 Environment: Windows
Reporter: xueqiang wang
Assignee: xueqiang wang
  Labels: newbie, patch
 Attachments: Improve consumer read performance for Windows.patch, 
 KAFKA-1646-truncate-off-trailing-zeros-on-broker-restart-if-bro.patch, 
 KAFKA-1646_20141216_163008.patch, KAFKA-1646_20150306_005526.patch


 This patch is for the Windows platform only. On Windows, if there is 
 more than one replica writing to disk, the segment log files will not be 
 contiguous on disk and consumer read performance will drop 
 greatly. This fix allocates more disk space when rolling a new segment, 
 which improves consumer read performance on the NTFS file system.
 This patch doesn't affect file allocation on other filesystems, as it only 
 adds statements like 'if(Os.iswindow)' or adds methods used on Windows.
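The allocate-then-truncate idea described above can be sketched in plain Java (the method names are hypothetical; the actual patch changes Kafka's FileMessageSet and LogSegment):

```java
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;

public class PreallocateSketch {
    // On roll, reserve the full segment size up front so NTFS can keep the
    // file contiguous instead of growing it in small fragments.
    static void rollSegment(File f, long segmentBytes) throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile(f, "rw")) {
            raf.setLength(segmentBytes);
        }
    }

    // On clean shutdown, shrink back to the bytes actually written,
    // truncating off the trailing zeros.
    static void closeCleanly(File f, long writtenBytes) throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile(f, "rw")) {
            raf.setLength(writtenBytes);
        }
    }

    public static void main(String[] args) throws IOException {
        File f = File.createTempFile("segment", ".log");
        f.deleteOnExit();
        rollSegment(f, 1 << 20);
        System.out.println(f.length());  // prints "1048576"
        closeCleanly(f, 4096);
        System.out.println(f.length());  // prints "4096"
    }
}
```

The truncate step matters because an unclean shutdown leaves the preallocated zeros in place, which is what the "truncate off trailing zeros on broker restart" attachment addresses.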





Re: [DISCUSS] KIP-11- Authorization design for kafka security

2015-03-06 Thread Harsha
Hi Parth,
Thanks for putting this together. Overall it looks good to
me. Although AdminUtils is a concern, KIP-4 can probably fix
that part.
Thanks,
Harsha

On Thu, Mar 5, 2015, at 10:39 AM, Parth Brahmbhatt wrote:
 Forgot to add links to wiki and jira.
 
 Link to wiki:
 https://cwiki.apache.org/confluence/display/KAFKA/KIP-11+-+Authorization+Interface
 Link to Jira: https://issues.apache.org/jira/browse/KAFKA-1688
 
 Thanks
 Parth
 
 From: Parth Brahmbhatt pbrahmbh...@hortonworks.com
 Date: Thursday, March 5, 2015 at 10:33 AM
 To: dev@kafka.apache.org
 Subject: [DISCUSS] KIP-11- Authorization design for kafka security
 
 Hi,
 
 KIP-11 is open for discussion; I have updated the wiki with the design
 and open questions.
 
 Thanks
 Parth


[jira] [Comment Edited] (KAFKA-1646) Improve consumer read performance for Windows

2015-03-06 Thread Honghai Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14350126#comment-14350126
 ] 

Honghai Chen edited comment on KAFKA-1646 at 3/6/15 8:59 AM:
-

Updated reviewboard against branch origin/0.8.1.
Please check the review at https://reviews.apache.org/r/29091/diff/7/
[~jkreps]


was (Author: waldenchen):
Updated reviewboard  against branch origin/0.8.1

 Improve consumer read performance for Windows
 -

 Key: KAFKA-1646
 URL: https://issues.apache.org/jira/browse/KAFKA-1646
 Project: Kafka
  Issue Type: Improvement
  Components: log
Affects Versions: 0.8.1.1
 Environment: Windows
Reporter: xueqiang wang
Assignee: xueqiang wang
  Labels: newbie, patch
 Attachments: Improve consumer read performance for Windows.patch, 
 KAFKA-1646-truncate-off-trailing-zeros-on-broker-restart-if-bro.patch, 
 KAFKA-1646_20141216_163008.patch, KAFKA-1646_20150306_005526.patch


 This patch is for the Windows platform only. On Windows, if there is 
 more than one replica writing to disk, the segment log files will not be 
 contiguous on disk and consumer read performance will drop 
 greatly. This fix allocates more disk space when rolling a new segment, 
 which improves consumer read performance on the NTFS file system.
 This patch doesn't affect file allocation on other filesystems, as it only 
 adds statements like 'if(Os.iswindow)' or adds methods used on Windows.





Re: Review Request 29091: Improve 1646 fix by truncate extra space when clean shutdown

2015-03-06 Thread Qianlin Xia

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/29091/
---

(Updated March 6, 2015, 9:11 a.m.)


Review request for kafka.


Changes
---

Correct the title


Bugs: KAFKA-1646
https://issues.apache.org/jira/browse/KAFKA-1646


Repository: kafka


Description (updated)
---

Kafka 1646 fix


Diffs
-

  core/src/main/scala/kafka/log/FileMessageSet.scala 
e1f8b979c3e6f62ea235bd47bc1587a1291443f9 
  core/src/main/scala/kafka/log/Log.scala 
46df8d99d977a3b010a9b9f4698187fa9bfb2498 
  core/src/main/scala/kafka/log/LogManager.scala 
7cee5435b23fcd0d76f531004911a2ca499df4f8 
  core/src/main/scala/kafka/log/LogSegment.scala 
0d6926ea105a99c9ff2cfc9ea6440f2f2d37bde8 
  core/src/main/scala/kafka/utils/Utils.scala 
a89b0463685e6224d263bc9177075e1bb6b93d04 

Diff: https://reviews.apache.org/r/29091/diff/


Testing
---


Thanks,

Qianlin Xia



[jira] [Comment Edited] (KAFKA-1646) Improve consumer read performance for Windows

2015-03-06 Thread Honghai Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14350126#comment-14350126
 ] 

Honghai Chen edited comment on KAFKA-1646 at 3/6/15 9:02 AM:
-

Updated reviewboard against branch origin/0.8.1.
Hi [~jkreps] [~junrao] [~jghoman], please check the review at
https://reviews.apache.org/r/29091/diff/7/, appreciated.


was (Author: waldenchen):
Updated reviewboard  against branch origin/0.8.1
Hi,  [~jkreps] [~junrao] [~jghoman]] please check the review at
https://reviews.apache.org/r/29091/diff/7/  , appreciate.

 Improve consumer read performance for Windows
 -

 Key: KAFKA-1646
 URL: https://issues.apache.org/jira/browse/KAFKA-1646
 Project: Kafka
  Issue Type: Improvement
  Components: log
Affects Versions: 0.8.1.1
 Environment: Windows
Reporter: xueqiang wang
Assignee: xueqiang wang
  Labels: newbie, patch
 Attachments: Improve consumer read performance for Windows.patch, 
 KAFKA-1646-truncate-off-trailing-zeros-on-broker-restart-if-bro.patch, 
 KAFKA-1646_20141216_163008.patch, KAFKA-1646_20150306_005526.patch


 This patch is for the Windows platform only. On Windows, if there is 
 more than one replica writing to disk, the segment log files will not be 
 contiguous on disk and consumer read performance will drop 
 greatly. This fix allocates more disk space when rolling a new segment, 
 which improves consumer read performance on the NTFS file system.
 This patch doesn't affect file allocation on other filesystems, as it only 
 adds statements like 'if(Os.iswindow)' or adds methods used on Windows.





Review Request 31806: Patch for KAFKA-1501

2015-03-06 Thread Ewen Cheslack-Postava

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/31806/
---

Review request for kafka.


Bugs: KAFKA-1501
https://issues.apache.org/jira/browse/KAFKA-1501


Repository: kafka


Description
---

This removes the TestUtils.choosePorts and TestZKUtils utilities because the
ports they claim to allocate can't actually be guaranteed to work. Instead, we
allow the port to be 0 to make the kernel give us a random port. This is only
useful in tests, but ensures we'll always be able to bind a socket as long as
some ports are still available.

The impact on the main code is fairly minimal, but we also have to be careful
about using the advertisedPort setting since it defaults to the port setting,
which may no longer represent the actual port. To support this case and so tests
are able to discover the port the server was bound to, we now provide a
boundPort method on the server.
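The port-0 approach can be sketched with a plain ServerSocket (the boundPort name here only mirrors the server method described above):

```java
import java.io.IOException;
import java.net.ServerSocket;

public class EphemeralPortSketch {
    public static void main(String[] args) throws IOException {
        // Bind to port 0: the kernel assigns a free ephemeral port, so
        // tests never race over a port number chosen ahead of time.
        try (ServerSocket server = new ServerSocket(0)) {
            // Analogous to the boundPort method the patch adds: the real
            // port is only known after binding, not from the config.
            int boundPort = server.getLocalPort();
            System.out.println(boundPort > 0);  // prints "true"
        }
    }
}
```

This is why config fields that depend on the port must become methods in the tests: the value simply does not exist until setUp() has bound the socket.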

Most of the changes to the tests are straightforward adaptations. A few settings
that were previously just val fields must now be methods because their value
depends on the port value, which won't be known until setUp() starts the
servers. The biggest impact of this is that we cannot generate broker configs
during the test class initialization. Instead, KafkaServerTestHarness now
provides a hook that classes implement to create configs and a method that gets
them that is compatible with the old field version in order to keep code changes
to a minimum.

Fix testBrokerFailure test to better handle the changing broker addresses 
caused by bouncing the servers when using randomly allocated ports.


Temporarily disable testBrokerFailure test.


Diffs
-

  clients/src/test/java/org/apache/kafka/common/network/SelectorTest.java 
0d030bc9becfa7db5b27ecf1df03eb71074f92d3 
  clients/src/test/java/org/apache/kafka/test/TestUtils.java 
20dba7b9199273ca8952c4fea71efadc2f09f044 
  core/src/main/scala/kafka/network/SocketServer.scala 
76ce41aed6e04ac5ba88395c4d5008aca17f9a73 
  core/src/main/scala/kafka/server/KafkaServer.scala 
378a74d9e8e408e1e5d283badf3eded6333fadff 
  core/src/test/scala/integration/kafka/api/IntegrationTestHarness.scala 
82fe4c9a138617f7af99a54cca7176d6c80747d0 
  core/src/test/scala/integration/kafka/api/ProducerCompressionTest.scala 
cae72f4f87f10b843c29dc731c6e0028b1d50734 
  core/src/test/scala/integration/kafka/api/ProducerFailureHandlingTest.scala 
8246e1281097e33eb8fadb291dc5feefdb631515 
  core/src/test/scala/integration/kafka/api/ProducerSendTest.scala 
3df450784592b894008e7507b2737f9bb07f7bd2 
  core/src/test/scala/unit/kafka/admin/AddPartitionsTest.scala 
8bc178505e00932a316c46ed1d904bd57b5b3f75 
  core/src/test/scala/unit/kafka/admin/AdminTest.scala 
ee0b21e6a94ad79c11dd08f6e5adf98c333e2ec9 
  core/src/test/scala/unit/kafka/admin/DeleteConsumerGroupTest.scala 
1baff0ea9826495c85c28763deeed78d052728fa 
  core/src/test/scala/unit/kafka/admin/DeleteTopicTest.scala 
6258983451b0ff5dfdc5e79be47c90a17525e284 
  core/src/test/scala/unit/kafka/consumer/ConsumerIteratorTest.scala 
995397ba2e2dfc6fadd9d5c5efd90f2c4ac0d59c 
  core/src/test/scala/unit/kafka/consumer/ZookeeperConsumerConnectorTest.scala 
19640cc55b5baa0a26a808d708b7f4caf491c9f0 
  core/src/test/scala/unit/kafka/integration/AutoOffsetResetTest.scala 
ffa6c306a44311296fb182a61529e5168f0a84c4 
  core/src/test/scala/unit/kafka/integration/FetcherTest.scala 
3093e459935ecf8e5b34fca34a422674562a7034 
  core/src/test/scala/unit/kafka/integration/KafkaServerTestHarness.scala 
dc0512b526e914df7e7581b27df18f498da428e2 
  core/src/test/scala/unit/kafka/integration/PrimitiveApiTest.scala 
30deaf47b64592f2e1cc84a4156671fac11b67ef 
  core/src/test/scala/unit/kafka/integration/ProducerConsumerTestHarness.scala 
108c2e7f47ede038855e7fa3c3df582d86e8c5c3 
  core/src/test/scala/unit/kafka/integration/RollingBounceTest.scala 
4d27e41c727e73544b2b214a0a0b60f6acdbfd17 
  core/src/test/scala/unit/kafka/integration/TopicMetadataTest.scala 
a671af4a87d5c2fb42ff48c553bca7cae6538231 
  core/src/test/scala/unit/kafka/integration/UncleanLeaderElectionTest.scala 
8342cae564ebc39fe74a512343a4523072ca205a 
  
core/src/test/scala/unit/kafka/javaapi/consumer/ZookeeperConsumerConnectorTest.scala
 3d0fc9deda2d3a39f2618a5be3edd98cd935ffbb 
  core/src/test/scala/unit/kafka/log/LogTest.scala 
8cd5f2fa4a1a536c3983c5b6eac3d80de49d5a94 
  core/src/test/scala/unit/kafka/log4j/KafkaLog4jAppenderTest.scala 
36db9172ea2d4d7e242e023ba914596c1f64f5f4 
  core/src/test/scala/unit/kafka/metrics/MetricsTest.scala 
0f58ad8e698e3c0ec76c510bd5f76912a992209c 
  core/src/test/scala/unit/kafka/network/SocketServerTest.scala 
0af23abf146d99e3d6cf31e5d6b95a9e63318ddb 
  core/src/test/scala/unit/kafka/producer/AsyncProducerTest.scala 
be90c5bc7f1f5ba8a237d1c7176f27029727c918 
  core/src/test/scala/unit/kafka/producer/ProducerTest.scala 

Re: [DISCUSS] KIP-6 - New reassignment partition logic for re-balancing

2015-03-06 Thread Guozhang Wang
I am +1 on Todd's suggestion: the default reassignment scheme is only used
when a reassignment command is issued with no scheme specified, and
changing this default scheme should not automatically trigger a
reassignment of all existing topics; it will only take effect when the next
reassignment command with no specific scheme is issued.

On Thu, Mar 5, 2015 at 10:16 AM, Todd Palino tpal...@gmail.com wrote:

 I would not think that partitions moving would cause any orphaned messages
 like that. I would be more concerned about what happens when you change the
 default on a running cluster from one scheme to another. Would we want to
 support some kind of automated reassignment of existing partitions
 (personally - no. I want to trigger that manually because it is a very disk
 and network intensive process)?

 -Todd

 On Wed, Mar 4, 2015 at 7:33 PM, Tong Li liton...@us.ibm.com wrote:

 
 
  Todd,
   I think a pluggable design is good with a solid default. The only issue I
   see is that when we use one scheme and switch to another, will we end up
   with some unread messages hanging around, with no one knowing it is their
   responsibility to take care of them?
 
  Thanks.
 
  Tong
 
  Sent from my iPhone
 
   On Mar 5, 2015, at 10:46 AM, Todd Palino tpal...@gmail.com wrote:
  
   Apologize for the late comment on this...
  
   So fair assignment by count (taking into account the current partition
   count of each broker) is very good. However, it's worth noting that all
   partitions are not created equal. We have actually been performing more
   rebalance work based on the partition size on disk, as given equal
   retention of all topics, the size on disk is a better indicator of the
   amount of traffic a partition gets, both in terms of storage and network
   traffic. Overall, this seems to be a better balance.
  
   In addition to this, I think there is very much a need to have Kafka be
   rack-aware. That is, to be able to assure that for a given cluster, you
   never assign all replicas for a given partition in the same rack. This
   would allow us to guard against maintenances or power failures that
   affect a full rack of systems (or a given switch).
  
   I think it would make sense to implement the reassignment logic as a
   pluggable component. That way it would be easy to select a scheme when
   performing a reassignment (count, size, rack aware). Configuring a
   default scheme for a cluster would allow for the brokers to create new
   topics and partitions in compliance with the requested policy.
  
   -Todd
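A pluggable scheme of the kind Todd describes could hypothetically take the shape of a small strategy interface. None of the names below exist in Kafka; they only illustrate the idea of swapping count-based, size-based, or rack-aware schemes behind one API:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical strategy interface for pluggable partition reassignment.
interface ReassignmentPolicy {
    // Returns partition id -> ordered list of replica broker ids.
    Map<Integer, List<Integer>> assign(List<Integer> brokers,
                                       int replicationFactor,
                                       int numPartitions);
}

// Simplest scheme: spread replicas round-robin by count, ignoring
// partition size on disk and rack placement entirely.
class CountBasedPolicy implements ReassignmentPolicy {
    @Override
    public Map<Integer, List<Integer>> assign(List<Integer> brokers,
                                              int replicationFactor,
                                              int numPartitions) {
        Map<Integer, List<Integer>> assignment = new HashMap<>();
        for (int p = 0; p < numPartitions; p++) {
            List<Integer> replicas = new ArrayList<>();
            for (int r = 0; r < replicationFactor; r++)
                replicas.add(brokers.get((p + r) % brokers.size()));
            assignment.put(p, replicas);
        }
        return assignment;
    }
}
```

A size-based or rack-aware implementation would plug in behind the same interface, which is what would make a per-cluster default scheme configurable.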
  
  
   On Thu, Jan 22, 2015 at 10:13 PM, Joe Stein joe.st...@stealth.ly
  wrote:
  
 I will go back through the ticket and code and write more up. Should be
 able to do that sometime next week. The intention was to not replace
 existing functionality but to issue a WARN on use. In the version
 following its release we could then deprecate it... I will fix the KIP
 for that too.
   
On Fri, Jan 23, 2015 at 12:34 AM, Neha Narkhede n...@confluent.io
  wrote:
   
 Hey Joe,

 1. Could you add details to the Public Interface section of the KIP?
 This should include the proposed changes to the partition reassignment
 tool. Also, maybe the new option can be named --rebalance instead of
 --re-balance?
 2. It makes sense to list --decommission-broker as part of this KIP.
 Similarly, shouldn't we also have an --add-broker option? The way I see
 this is that there are several events when a partition reassignment is
 required. Before this functionality is automated on the broker, the
 tool will generate an ideal replica placement for each such event. The
 users should merely have to specify the nature of the event e.g. adding
 a broker or decommissioning an existing broker or merely rebalancing.
 3. If I understand the KIP correctly, the upgrade plan for this feature
 includes removing the existing --generate option on the partition
 reassignment tool in 0.8.3 while adding all the new options in the same
 release. Is that correct?

 Thanks,
 Neha

 On Thu, Jan 22, 2015 at 9:23 PM, Jay Kreps jay.kr...@gmail.com
  wrote:

  Ditto on this one. Can you give the algorithm we want to implement?
 
  Also I think in terms of scope this is just proposing to change the
  logic in ReassignPartitionsCommand? I think we've had the discussion
  various times on the mailing list that what people really want is just
  for Kafka to do its best to balance data in an online fashion (for
  some definition of balance). i.e. if you add a new node partitions
  would slowly migrate to it, and if a node dies, partitions slowly
  migrate off it. This could potentially be more work, but I'm not sure
  how much more. Has anyone thought about how to do it?
 
  -Jay
 
  On Wed, Jan 21, 2015 at 10:11 PM, Joe Stein 
 

[jira] [Updated] (KAFKA-1501) transient unit tests failures due to port already in use

2015-03-06 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava updated KAFKA-1501:
-
Attachment: KAFKA-1501.patch

 transient unit tests failures due to port already in use
 

 Key: KAFKA-1501
 URL: https://issues.apache.org/jira/browse/KAFKA-1501
 Project: Kafka
  Issue Type: Improvement
  Components: core
Reporter: Jun Rao
Assignee: Guozhang Wang
  Labels: newbie
 Attachments: KAFKA-1501-choosePorts.patch, KAFKA-1501.patch, 
 KAFKA-1501.patch, KAFKA-1501.patch, KAFKA-1501.patch, test-100.out, 
 test-100.out, test-27.out, test-29.out, test-32.out, test-35.out, 
 test-38.out, test-4.out, test-42.out, test-45.out, test-46.out, test-51.out, 
 test-55.out, test-58.out, test-59.out, test-60.out, test-69.out, test-72.out, 
 test-74.out, test-76.out, test-84.out, test-87.out, test-91.out, test-92.out


 Saw the following transient failures.
 kafka.api.ProducerFailureHandlingTest > testTooLargeRecordWithAckOne FAILED
 kafka.common.KafkaException: Socket server failed to bind to 
 localhost:59909: Address already in use.
 at kafka.network.Acceptor.openServerSocket(SocketServer.scala:195)
 at kafka.network.Acceptor.init(SocketServer.scala:141)
 at kafka.network.SocketServer.startup(SocketServer.scala:68)
 at kafka.server.KafkaServer.startup(KafkaServer.scala:95)
 at kafka.utils.TestUtils$.createServer(TestUtils.scala:123)
 at 
 kafka.api.ProducerFailureHandlingTest.setUp(ProducerFailureHandlingTest.scala:68)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1501) transient unit tests failures due to port already in use

2015-03-06 Thread Ewen Cheslack-Postava (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14350725#comment-14350725
 ] 

Ewen Cheslack-Postava commented on KAFKA-1501:
--

Uploaded a wip patch. It gets rid of choosePorts entirely and makes the tests 
work using random ports instead for both ZK and Kafka. A couple of notes:

1. One change this necessitated is that a bunch of things that used to just be 
initialized during test class construction now have to be dynamic since you 
can't generate the Kafka configs until you know the ZK port. This has two 
impacts. First, KafkaServerTestHarness subclasses now have to override a 
generateConfigs() method rather than just overriding the configs field. Second, 
the minimal patch to make this work maintains the ability to access some data 
(info about zk, the list of configs) like fields (no ()), but I think this 
might just be misleading or confusing to people writing tests -- something like 
getConfigs() might make it clearer that it will only be valid while a test is 
running.
2. A few tests were specifying ports directly instead of using choosePorts. I 
think I found them all, but it'd be good to have a couple more eyes looking for 
them.
3. Tests that bounce brokers became more difficult because the port changes 
when you restart. In most cases this isn't a problem; you just need to make 
sure you instantiate producers/consumers at the right time. However, one test 
(ProducerFailureHandlingTest.testBrokerFailure) revealed an underlying issue. 
There are conditions where you can bounce the brokers too quickly and because 
of the way the new producer gets metadata, it can get stuck with old metadata 
and none of the brokers are listening on the ports it has. I included a patch 
which in theory should address the problem, but the producer is also having an 
issue where sometimes connection requests take a long time to finish, and 
during that time the brokers all bounce, leaving the producer with no useful 
addresses in its copy of the metadata. In practice you would never bounce your 
servers to new addresses that quickly, so this is purely an artifact of having 
to use random ports during tests. If anyone has suggestions for how to handle 
this, I'm all ears. In order to allow testing the rest of the patch, I 
commented out that test for the time being.

I wanted to get this up so we can discuss these issues, but also so [~guozhang] 
can test this to verify the approach will work before I spend much more time on 
it. I tested a few times with 5 copies of the tests running concurrently.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1501) transient unit tests failures due to port already in use

2015-03-06 Thread Ewen Cheslack-Postava (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14350706#comment-14350706
 ] 

Ewen Cheslack-Postava commented on KAFKA-1501:
--

Created reviewboard https://reviews.apache.org/r/31806/diff/
 against branch origin/trunk




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 31369: Patch for KAFKA-1982

2015-03-06 Thread Gwen Shapira

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/31369/#review75529
---

Ship it!


That's a really sweet producer example :)

LGTM.

- Gwen Shapira


On March 4, 2015, 1:51 a.m., Ashish Singh wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/31369/
 ---
 
 (Updated March 4, 2015, 1:51 a.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-1982
 https://issues.apache.org/jira/browse/KAFKA-1982
 
 
 Repository: kafka
 
 
 Description
 ---
 
 KAFKA-1982: change kafka.examples.Producer to use the new java producer
 
 
 Diffs
 -
 
   
 clients/src/main/java/org/apache/kafka/common/serialization/IntegerDeserializer.java
  PRE-CREATION 
   
 clients/src/main/java/org/apache/kafka/common/serialization/IntegerSerializer.java
  PRE-CREATION 
   
 clients/src/test/java/org/apache/kafka/common/serialization/SerializationTest.java
  f5cd61c1aa9433524da0b83826a766389de68a0b 
   examples/README 53db6969b2e2d49e23ab13283b9146848e37434e 
   examples/src/main/java/kafka/examples/Consumer.java 
 13135b954f3078eeb7394822b0db25470b746f03 
   examples/src/main/java/kafka/examples/KafkaConsumerProducerDemo.java 
 1239394190fe557e025fbd8f3803334402b0aeea 
   examples/src/main/java/kafka/examples/Producer.java 
 96e98933148d07564c1b30ba8e805e2433c2adc8 
   examples/src/main/java/kafka/examples/SimpleConsumerDemo.java 
 0d66fe5f8819194c8624aed4a21105733c20cc8e 
 
 Diff: https://reviews.apache.org/r/31369/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Ashish Singh
 




Re: Review Request 31650: Drag Coordinator and FetchManager out of KafkaConsumer, fix a bunch of consumer test issues

2015-03-06 Thread Guozhang Wang


 On March 5, 2015, 10:42 p.m., Onur Karaman wrote:
  clients/src/main/java/org/apache/kafka/clients/consumer/internals/Coordinator.java,
   lines 137-138
  https://reviews.apache.org/r/31650/diff/1/?file=882439#file882439line137
 
  This is really minor, but are longs necessary for these time parameters?
  
  Integer.MAX_VALUE translates to a little over 24 days.

These two configs are defined in the common client configs that are used by 
producers also. I think it would be OK to be more conservative on these config 
values.
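As a side note, Onur's 24-day figure checks out: 2^31 - 1 milliseconds is just under 25 days, so an int-valued millisecond timeout would cap out there.

```java
import java.util.concurrent.TimeUnit;

// Verify how many whole days fit into Integer.MAX_VALUE milliseconds.
public class MaxIntMillis {
    public static void main(String[] args) {
        long maxMs = Integer.MAX_VALUE;  // 2,147,483,647 ms
        System.out.println(TimeUnit.MILLISECONDS.toDays(maxMs));  // prints 24
    }
}
```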


 On March 5, 2015, 10:42 p.m., Onur Karaman wrote:
  clients/src/main/java/org/apache/kafka/clients/consumer/internals/Coordinator.java,
   line 183
  https://reviews.apache.org/r/31650/diff/1/?file=882439#file882439line183
 
  This is marking the receivedResponse as the time the request was sent 
  rather than the time we received the response.

Actually we do not need a last heartbeat response, as the consumer client does 
not check for timeout expiration at all.


 On March 5, 2015, 10:42 p.m., Onur Karaman wrote:
  clients/src/main/java/org/apache/kafka/clients/consumer/internals/Coordinator.java,
   lines 218-234
  https://reviews.apache.org/r/31650/diff/1/?file=882439#file882439line218
 
  I think this is simpler as:
  ```java
  boolean done = false;
  while (!done) {
  }
  ```

Actually this is not simpler with this pattern, as you need to initialize the 
flag to true inside the loop and then determine whether to override it to 
false, not vice versa, right?
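For comparison, the two loop shapes under discussion, sketched on a toy retry (illustrative only, not the actual Coordinator code):

```java
// Sketch of the two retry-loop shapes discussed above.
public class RetryLoops {
    // Shape 1: infinite loop, break on success.
    static int attemptsWithBreak(int succeedOn) {
        int attempts = 0;
        while (true) {
            attempts++;
            if (attempts == succeedOn)   // stand-in for "request succeeded"
                break;
        }
        return attempts;
    }

    // Shape 2: done flag, set each iteration then conditionally cleared.
    static int attemptsWithFlag(int succeedOn) {
        int attempts = 0;
        boolean done = false;
        while (!done) {
            attempts++;
            done = true;                 // assume success...
            if (attempts != succeedOn)   // ...then clear the flag on failure
                done = false;
        }
        return attempts;
    }
}
```

With the flag variant, the flag is set optimistically each iteration and cleared again on failure, which is the inversion the reply is pointing at.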


 On March 5, 2015, 10:42 p.m., Onur Karaman wrote:
  clients/src/main/java/org/apache/kafka/common/protocol/Errors.java, lines 
  71-72
  https://reviews.apache.org/r/31650/diff/1/?file=882446#file882446line71
 
  Using the term consumer implies that generation ids are associated 
  with a consumer, while they're really associated with a group.
  
  Maybe just call this ILLEGAL_GENERATION as stated in the wiki?
  
  
  https://cwiki.apache.org/confluence/display/KAFKA/Kafka+0.9+Consumer+Rewrite+Design#Kafka0.9ConsumerRewriteDesign-Groupmanagementprotocol

Good point.


- Guozhang


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/31650/#review75355
---


On March 5, 2015, 10:57 p.m., Guozhang Wang wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/31650/
 ---
 
 (Updated March 5, 2015, 10:57 p.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-1910
 https://issues.apache.org/jira/browse/KAFKA-1910
 
 
 Repository: kafka
 
 
 Description
 ---
 
 See comments in KAFKA-1910;
 
 Updated RB includes unit test for Coordinator / FetchManager / Heartbeat and 
 a couple changes on MemoryRecords and test utils.
 
 
 Diffs
 -
 
   clients/src/main/java/org/apache/kafka/clients/CommonClientConfigs.java 
 06fcfe62cc1fe76f58540221698ef076fe150e96 
   clients/src/main/java/org/apache/kafka/clients/KafkaClient.java 
 8a3e55aaff7d8c26e56a8407166a4176c1da2644 
   clients/src/main/java/org/apache/kafka/clients/NetworkClient.java 
 a7fa4a9dfbcfbc4d9e9259630253cbcded158064 
   clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerConfig.java 
 5fb21001abd77cac839bd724afa04e377a3e82aa 
   clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java 
 67ceb754a52c07143c69b053fe128b9e24060b99 
   
 clients/src/main/java/org/apache/kafka/clients/consumer/internals/Coordinator.java
  PRE-CREATION 
   
 clients/src/main/java/org/apache/kafka/clients/consumer/internals/FetchManager.java
  PRE-CREATION 
   
 clients/src/main/java/org/apache/kafka/clients/consumer/internals/Heartbeat.java
  ee0751e4949120d114202c2299d49612a89b9d97 
   
 clients/src/main/java/org/apache/kafka/clients/consumer/internals/SubscriptionState.java
  d41d3068c11d4b5c640467dc0ae1b7c20a8d128c 
   clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java 
 7397e565fd865214529ffccadd4222d835ac8110 
   clients/src/main/java/org/apache/kafka/clients/producer/ProducerConfig.java 
 122375c473bf73caf05299b9f5174c6b226ca863 
   
 clients/src/main/java/org/apache/kafka/clients/producer/internals/Sender.java 
 ed9c63a6679e3aaf83d19fde19268553a4c107c2 
   clients/src/main/java/org/apache/kafka/common/network/Selector.java 
 6baad9366a1975dbaba1786da91efeaa38533319 
   clients/src/main/java/org/apache/kafka/common/protocol/Errors.java 
 ad2171f5417c93194f5f234bdc7fdd0b8d59a8a8 
   clients/src/main/java/org/apache/kafka/common/record/MemoryRecords.java 
 083e7a39249ab56a73a014b106876244d619f189 
   clients/src/main/java/org/apache/kafka/common/requests/FetchResponse.java 
 e67c4c8332cb1dd3d9cde5de687df7760045dfe6 
   
 

Re: Review Request 31650: Drag Coordinator and FetchManager out of KafkaConsumer, fix a bunch of consumer test issues

2015-03-06 Thread Guozhang Wang


 On March 5, 2015, 11:39 p.m., Onur Karaman wrote:
  clients/src/test/java/org/apache/kafka/clients/consumer/internals/CoordinatorTest.java,
   line 83
  https://reviews.apache.org/r/31650/diff/2/?file=886350#file886350line83
 
  I think these scenarios should be split up into separate tests.

The general rule is to define unit test cases by functionality rather than by 
scenario, hence I think it is OK to group them in one test.


- Guozhang


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/31650/#review75406
---


On March 5, 2015, 10:57 p.m., Guozhang Wang wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/31650/
 ---
 
 (Updated March 5, 2015, 10:57 p.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-1910
 https://issues.apache.org/jira/browse/KAFKA-1910
 
 
 Repository: kafka
 
 
 Description
 ---
 
 See comments in KAFKA-1910;
 
 Updated RB includes unit test for Coordinator / FetchManager / Heartbeat and 
 a couple changes on MemoryRecords and test utils.
 
 
 Diffs
 -
 
   clients/src/main/java/org/apache/kafka/clients/CommonClientConfigs.java 
 06fcfe62cc1fe76f58540221698ef076fe150e96 
   clients/src/main/java/org/apache/kafka/clients/KafkaClient.java 
 8a3e55aaff7d8c26e56a8407166a4176c1da2644 
   clients/src/main/java/org/apache/kafka/clients/NetworkClient.java 
 a7fa4a9dfbcfbc4d9e9259630253cbcded158064 
   clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerConfig.java 
 5fb21001abd77cac839bd724afa04e377a3e82aa 
   clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java 
 67ceb754a52c07143c69b053fe128b9e24060b99 
   
 clients/src/main/java/org/apache/kafka/clients/consumer/internals/Coordinator.java
  PRE-CREATION 
   
 clients/src/main/java/org/apache/kafka/clients/consumer/internals/FetchManager.java
  PRE-CREATION 
   
 clients/src/main/java/org/apache/kafka/clients/consumer/internals/Heartbeat.java
  ee0751e4949120d114202c2299d49612a89b9d97 
   
 clients/src/main/java/org/apache/kafka/clients/consumer/internals/SubscriptionState.java
  d41d3068c11d4b5c640467dc0ae1b7c20a8d128c 
   clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java 
 7397e565fd865214529ffccadd4222d835ac8110 
   clients/src/main/java/org/apache/kafka/clients/producer/ProducerConfig.java 
 122375c473bf73caf05299b9f5174c6b226ca863 
   
 clients/src/main/java/org/apache/kafka/clients/producer/internals/Sender.java 
 ed9c63a6679e3aaf83d19fde19268553a4c107c2 
   clients/src/main/java/org/apache/kafka/common/network/Selector.java 
 6baad9366a1975dbaba1786da91efeaa38533319 
   clients/src/main/java/org/apache/kafka/common/protocol/Errors.java 
 ad2171f5417c93194f5f234bdc7fdd0b8d59a8a8 
   clients/src/main/java/org/apache/kafka/common/record/MemoryRecords.java 
 083e7a39249ab56a73a014b106876244d619f189 
   clients/src/main/java/org/apache/kafka/common/requests/FetchResponse.java 
 e67c4c8332cb1dd3d9cde5de687df7760045dfe6 
   
 clients/src/main/java/org/apache/kafka/common/requests/HeartbeatResponse.java 
 0057496228feeeccbc0c009a42f5268fa2cb8611 
   
 clients/src/main/java/org/apache/kafka/common/requests/JoinGroupRequest.java 
 8c50e9be534c61ecf56106bf2b68cf678ea50d66 
   
 clients/src/main/java/org/apache/kafka/common/requests/JoinGroupResponse.java 
 52b1803d8b558c1eeb978ba8821496c7d3c20a6b 
   
 clients/src/main/java/org/apache/kafka/common/requests/ListOffsetResponse.java
  cfac47a4a05dc8a535595542d93e55237b7d1e93 
   
 clients/src/main/java/org/apache/kafka/common/requests/MetadataResponse.java 
 90f31413d7d80a06c0af359009cc271aa0c67be3 
   
 clients/src/main/java/org/apache/kafka/common/requests/OffsetCommitResponse.java
  4d3b9ececee4b4c0b50ba99da2ddbbb15f9cc08d 
   
 clients/src/main/java/org/apache/kafka/common/requests/OffsetFetchResponse.java
  edbed5880dc44fc178737a5e298c106a00f38443 
   clients/src/main/java/org/apache/kafka/common/requests/ProduceResponse.java 
 a00dcdf15d1c7bac7228be140647bd7d849deb9b 
   clients/src/test/java/org/apache/kafka/clients/MockClient.java 
 8f1a7a625e4eeafa44bbf9e5cff987de86c949be 
   
 clients/src/test/java/org/apache/kafka/clients/consumer/internals/CoordinatorTest.java
  PRE-CREATION 
   
 clients/src/test/java/org/apache/kafka/clients/consumer/internals/FetchManagerTest.java
  PRE-CREATION 
   
 clients/src/test/java/org/apache/kafka/clients/consumer/internals/HeartbeatTest.java
  PRE-CREATION 
   
 clients/src/test/java/org/apache/kafka/clients/consumer/internals/SubscriptionStateTest.java
  090087a319e2697d3a0653ca947d2cfa6d53f6c2 
   
 clients/src/test/java/org/apache/kafka/clients/producer/internals/RecordAccumulatorTest.java
  c1bc40648479d4c2ae4ac52f40dadc070a6bcf6f 
   
 

Re: Review Request 31591: Patch for KAFKA-1992

2015-03-06 Thread Gwen Shapira

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/31591/
---

(Updated March 6, 2015, 9:34 p.m.)


Review request for kafka.


Bugs: KAFKA-1992
https://issues.apache.org/jira/browse/KAFKA-1992


Repository: kafka


Description (updated)
---

add logging per Jiangjie Qin comment


revert unintentional changes to log4j


merge with trunk


few small fixes suggested by Jun


Diffs (updated)
-

  core/src/main/scala/kafka/cluster/Partition.scala 
c4bf48a801007ebe7497077d2018d6dffe1677d4 
  core/src/main/scala/kafka/server/DelayedProduce.scala 
4d763bf05efb24a394662721292fc54d32467969 

Diff: https://reviews.apache.org/r/31591/diff/


Testing
---


Thanks,

Gwen Shapira



[jira] [Commented] (KAFKA-1992) Following KAFKA-1697, checkEnoughReplicasReachOffset doesn't need to get requiredAcks

2015-03-06 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14350933#comment-14350933
 ] 

Gwen Shapira commented on KAFKA-1992:
-

Updated reviewboard https://reviews.apache.org/r/31591/diff/
 against branch trunk

 Following KAFKA-1697, checkEnoughReplicasReachOffset doesn't need to get 
 requiredAcks
 -

 Key: KAFKA-1992
 URL: https://issues.apache.org/jira/browse/KAFKA-1992
 Project: Kafka
  Issue Type: Bug
Reporter: Gwen Shapira
Assignee: Gwen Shapira
 Attachments: KAFKA-1992.patch, KAFKA-1992_2015-03-03_14:16:34.patch, 
 KAFKA-1992_2015-03-03_17:17:43.patch, KAFKA-1992_2015-03-06_13:34:20.patch


 Follow up from Jun's review:
 Should we just remove requiredAcks completely since 
 checkEnoughReplicasReachOffset() will only be called when requiredAcks is -1?
 Answer is: Yes, we should :)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1992) Following KAFKA-1697, checkEnoughReplicasReachOffset doesn't need to get requiredAcks

2015-03-06 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14350942#comment-14350942
 ] 

Gwen Shapira commented on KAFKA-1992:
-

Updated reviewboard https://reviews.apache.org/r/31591/diff/
 against branch trunk

 Following KAFKA-1697, checkEnoughReplicasReachOffset doesn't need to get 
 requiredAcks
 -

 Key: KAFKA-1992
 URL: https://issues.apache.org/jira/browse/KAFKA-1992
 Project: Kafka
  Issue Type: Bug
Reporter: Gwen Shapira
Assignee: Gwen Shapira
 Attachments: KAFKA-1992.patch, KAFKA-1992_2015-03-03_14:16:34.patch, 
 KAFKA-1992_2015-03-03_17:17:43.patch, KAFKA-1992_2015-03-06_13:34:20.patch, 
 KAFKA-1992_2015-03-06_13:36:32.patch, KAFKA-1992_2015-03-06_13:37:39.patch


 Follow up from Jun's review:
 Should we just remove requiredAcks completely since 
 checkEnoughReplicasReachOffset() will only be called when requiredAcks is -1?
 Answer is: Yes, we should :)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2003) Add upgrade tests

2015-03-06 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14350973#comment-14350973
 ] 

Gwen Shapira commented on KAFKA-2003:
-

Something like that :)

1. We may need two types of version identifiers for the upgrade configuration: 
a Git version (i.e. branch, tag or even commit hash) and a numerical version 
(0.8.2.0, 0.8.3.0, etc).
2. Also, perhaps Docker can be used to avoid compiling old versions (i.e. 
prepare Docker images for releases and use those?)
3. We need to check the rolling upgrade itself. So something like:

* bring up n from_version brokers, do a sanity check with from_version 
clients
* bring down one broker and replace with to_version broker. check that 
everything is ok
* replace rest of brokers. check again.
* bring down one broker, bump version in config file and start it. check again.
* bump version for the rest. check again. 
* continue with the client tests from here

Looking at this, maybe we need to start by developing something that can check 
that everything is ok :)
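The rolling sequence above could hypothetically be driven by a small loop like the following; brokers and the health check are stubbed out here, since the real versions would start actual Kafka processes and run produce/consume sanity checks:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Supplier;

// Illustrative driver for the rolling-upgrade procedure: replace one broker
// at a time, checking cluster health after every step.
class RollingUpgrade {
    static List<String> upgrade(List<String> fromVersionBrokers,
                                String toVersion,
                                Supplier<Boolean> healthCheck) {
        List<String> cluster = new ArrayList<>(fromVersionBrokers);
        for (int i = 0; i < cluster.size(); i++) {
            cluster.set(i, toVersion);  // bounce one broker onto the new version
            if (!healthCheck.get())     // "check that everything is ok"
                throw new IllegalStateException("cluster unhealthy at step " + i);
        }
        return cluster;
    }
}
```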

 Add upgrade tests
 -

 Key: KAFKA-2003
 URL: https://issues.apache.org/jira/browse/KAFKA-2003
 Project: Kafka
  Issue Type: Improvement
Reporter: Gwen Shapira
Assignee: Ashish K Singh

 To test protocol changes, compatibility and upgrade process, we need a good 
 way to test different versions of the product together and to test end-to-end 
 upgrade process.
 For example, for 0.8.2 to 0.8.3 test we want to check:
 * Can we start a cluster with a mix of 0.8.2 and 0.8.3 brokers?
 * Can a cluster of 0.8.3 brokers bump the protocol level one broker at a time?
 * Can 0.8.2 clients run against a cluster of 0.8.3 brokers?
 There are probably more questions. But an automated framework that can test 
 those and report results will be a good start.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1992) Following KAFKA-1697, checkEnoughReplicasReachOffset doesn't need to get requiredAcks

2015-03-06 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-1992:

Attachment: KAFKA-1992_2015-03-06_13:34:20.patch

 Following KAFKA-1697, checkEnoughReplicasReachOffset doesn't need to get 
 requiredAcks
 -

 Key: KAFKA-1992
 URL: https://issues.apache.org/jira/browse/KAFKA-1992
 Project: Kafka
  Issue Type: Bug
Reporter: Gwen Shapira
Assignee: Gwen Shapira
 Attachments: KAFKA-1992.patch, KAFKA-1992_2015-03-03_14:16:34.patch, 
 KAFKA-1992_2015-03-03_17:17:43.patch, KAFKA-1992_2015-03-06_13:34:20.patch


 Follow up from Jun's review:
 Should we just remove requiredAcks completely since 
 checkEnoughReplicasReachOffset() will only be called when requiredAcks is -1?
 Answer is: Yes, we should :)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1992) Following KAFKA-1697, checkEnoughReplicasReachOffset doesn't need to get requiredAcks

2015-03-06 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-1992:

Attachment: KAFKA-1992_2015-03-06_13:36:32.patch

 Following KAFKA-1697, checkEnoughReplicasReachOffset doesn't need to get 
 requiredAcks
 -

 Key: KAFKA-1992
 URL: https://issues.apache.org/jira/browse/KAFKA-1992
 Project: Kafka
  Issue Type: Bug
Reporter: Gwen Shapira
Assignee: Gwen Shapira
 Attachments: KAFKA-1992.patch, KAFKA-1992_2015-03-03_14:16:34.patch, 
 KAFKA-1992_2015-03-03_17:17:43.patch, KAFKA-1992_2015-03-06_13:34:20.patch, 
 KAFKA-1992_2015-03-06_13:36:32.patch


 Follow up from Jun's review:
 Should we just remove requiredAcks completely since 
 checkEnoughReplicasReachOffset() will only be called when requiredAcks is -1?
 Answer is: Yes, we should :)





Re: Review Request 31591: Patch for KAFKA-1992

2015-03-06 Thread Gwen Shapira

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/31591/
---

(Updated March 6, 2015, 9:36 p.m.)


Review request for kafka.


Bugs: KAFKA-1992
https://issues.apache.org/jira/browse/KAFKA-1992


Repository: kafka


Description (updated)
---

add logging per Jiangjie Qin comment


revert unintentional changes to log4j


merge with trunk


few small fixes suggested by Jun


tiny typo


Diffs (updated)
-

  core/src/main/scala/kafka/cluster/Partition.scala 
c4bf48a801007ebe7497077d2018d6dffe1677d4 
  core/src/main/scala/kafka/server/DelayedProduce.scala 
4d763bf05efb24a394662721292fc54d32467969 

Diff: https://reviews.apache.org/r/31591/diff/


Testing
---


Thanks,

Gwen Shapira



[jira] [Updated] (KAFKA-1992) Following KAFKA-1697, checkEnoughReplicasReachOffset doesn't need to get requiredAcks

2015-03-06 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-1992:

Attachment: KAFKA-1992_2015-03-06_13:37:39.patch

 Following KAFKA-1697, checkEnoughReplicasReachOffset doesn't need to get 
 requiredAcks
 -

 Key: KAFKA-1992
 URL: https://issues.apache.org/jira/browse/KAFKA-1992
 Project: Kafka
  Issue Type: Bug
Reporter: Gwen Shapira
Assignee: Gwen Shapira
 Attachments: KAFKA-1992.patch, KAFKA-1992_2015-03-03_14:16:34.patch, 
 KAFKA-1992_2015-03-03_17:17:43.patch, KAFKA-1992_2015-03-06_13:34:20.patch, 
 KAFKA-1992_2015-03-06_13:36:32.patch, KAFKA-1992_2015-03-06_13:37:39.patch


 Follow up from Jun's review:
 Should we just remove requiredAcks completely since 
 checkEnoughReplicasReachOffset() will only be called when requiredAcks is -1?
 Answer is: Yes, we should :)





Re: Review Request 31591: Patch for KAFKA-1992

2015-03-06 Thread Gwen Shapira

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/31591/
---

(Updated March 6, 2015, 9:37 p.m.)


Review request for kafka.


Bugs: KAFKA-1992
https://issues.apache.org/jira/browse/KAFKA-1992


Repository: kafka


Description (updated)
---

add logging per Jiangjie Qin comment


revert unintentional changes to log4j


merge with trunk


few small fixes suggested by Jun


tiny typo


formatting


Diffs (updated)
-

  core/src/main/scala/kafka/cluster/Partition.scala 
c4bf48a801007ebe7497077d2018d6dffe1677d4 
  core/src/main/scala/kafka/server/DelayedProduce.scala 
4d763bf05efb24a394662721292fc54d32467969 

Diff: https://reviews.apache.org/r/31591/diff/


Testing
---


Thanks,

Gwen Shapira



Re: Review Request 31650: Drag Coordinator and FetchManager out of KafkaConsumer, fix a bunch of consumer test issues

2015-03-06 Thread Onur Karaman


 On March 5, 2015, 11:39 p.m., Onur Karaman wrote:
  clients/src/test/java/org/apache/kafka/clients/consumer/internals/CoordinatorTest.java,
   line 83
  https://reviews.apache.org/r/31650/diff/2/?file=886350#file886350line83
 
  I think these scenarios should be split up into separate tests.
 
 Guozhang Wang wrote:
 The general rule for defining unit test cases is to group by functionality 
 rather than by scenario, so I think it is OK to group them in one test.

Grouping them means that if one of the earlier scenarios fails, the later 
scenarios will not run, so we can't tell whether only that one scenario failed 
or whether several of the later scenarios would have failed as well.
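The trade-off can be made concrete with a tiny runner; this is an illustrative sketch, not Kafka test code, and all names are assumptions:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.BooleanSupplier;

// Sketch of the point above: when scenarios run independently, a failure
// (or exception) in an early scenario does not hide the results of the rest,
// whereas one grouped test stops at the first failing assertion.
public class ScenarioRunner {
    public static Map<String, Boolean> runIndependently(Map<String, BooleanSupplier> scenarios) {
        Map<String, Boolean> results = new LinkedHashMap<>();
        for (Map.Entry<String, BooleanSupplier> e : scenarios.entrySet()) {
            boolean passed;
            try {
                passed = e.getValue().getAsBoolean();
            } catch (RuntimeException ex) {
                passed = false; // a throwing scenario fails but does not abort the run
            }
            results.put(e.getKey(), passed); // later scenarios still execute
        }
        return results;
    }
}
```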


- Onur


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/31650/#review75406
---


On March 5, 2015, 10:57 p.m., Guozhang Wang wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/31650/
 ---
 
 (Updated March 5, 2015, 10:57 p.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-1910
 https://issues.apache.org/jira/browse/KAFKA-1910
 
 
 Repository: kafka
 
 
 Description
 ---
 
 See comments in KAFKA-1910;
 
 Updated RB includes unit test for Coordinator / FetchManager / Heartbeat and 
 a couple changes on MemoryRecords and test utils.
 
 
 Diffs
 -
 
   clients/src/main/java/org/apache/kafka/clients/CommonClientConfigs.java 
 06fcfe62cc1fe76f58540221698ef076fe150e96 
   clients/src/main/java/org/apache/kafka/clients/KafkaClient.java 
 8a3e55aaff7d8c26e56a8407166a4176c1da2644 
   clients/src/main/java/org/apache/kafka/clients/NetworkClient.java 
 a7fa4a9dfbcfbc4d9e9259630253cbcded158064 
   clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerConfig.java 
 5fb21001abd77cac839bd724afa04e377a3e82aa 
   clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java 
 67ceb754a52c07143c69b053fe128b9e24060b99 
   
 clients/src/main/java/org/apache/kafka/clients/consumer/internals/Coordinator.java
  PRE-CREATION 
   
 clients/src/main/java/org/apache/kafka/clients/consumer/internals/FetchManager.java
  PRE-CREATION 
   
 clients/src/main/java/org/apache/kafka/clients/consumer/internals/Heartbeat.java
  ee0751e4949120d114202c2299d49612a89b9d97 
   
 clients/src/main/java/org/apache/kafka/clients/consumer/internals/SubscriptionState.java
  d41d3068c11d4b5c640467dc0ae1b7c20a8d128c 
   clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java 
 7397e565fd865214529ffccadd4222d835ac8110 
   clients/src/main/java/org/apache/kafka/clients/producer/ProducerConfig.java 
 122375c473bf73caf05299b9f5174c6b226ca863 
   
 clients/src/main/java/org/apache/kafka/clients/producer/internals/Sender.java 
 ed9c63a6679e3aaf83d19fde19268553a4c107c2 
   clients/src/main/java/org/apache/kafka/common/network/Selector.java 
 6baad9366a1975dbaba1786da91efeaa38533319 
   clients/src/main/java/org/apache/kafka/common/protocol/Errors.java 
 ad2171f5417c93194f5f234bdc7fdd0b8d59a8a8 
   clients/src/main/java/org/apache/kafka/common/record/MemoryRecords.java 
 083e7a39249ab56a73a014b106876244d619f189 
   clients/src/main/java/org/apache/kafka/common/requests/FetchResponse.java 
 e67c4c8332cb1dd3d9cde5de687df7760045dfe6 
   
 clients/src/main/java/org/apache/kafka/common/requests/HeartbeatResponse.java 
 0057496228feeeccbc0c009a42f5268fa2cb8611 
   
 clients/src/main/java/org/apache/kafka/common/requests/JoinGroupRequest.java 
 8c50e9be534c61ecf56106bf2b68cf678ea50d66 
   
 clients/src/main/java/org/apache/kafka/common/requests/JoinGroupResponse.java 
 52b1803d8b558c1eeb978ba8821496c7d3c20a6b 
   
 clients/src/main/java/org/apache/kafka/common/requests/ListOffsetResponse.java
  cfac47a4a05dc8a535595542d93e55237b7d1e93 
   
 clients/src/main/java/org/apache/kafka/common/requests/MetadataResponse.java 
 90f31413d7d80a06c0af359009cc271aa0c67be3 
   
 clients/src/main/java/org/apache/kafka/common/requests/OffsetCommitResponse.java
  4d3b9ececee4b4c0b50ba99da2ddbbb15f9cc08d 
   
 clients/src/main/java/org/apache/kafka/common/requests/OffsetFetchResponse.java
  edbed5880dc44fc178737a5e298c106a00f38443 
   clients/src/main/java/org/apache/kafka/common/requests/ProduceResponse.java 
 a00dcdf15d1c7bac7228be140647bd7d849deb9b 
   clients/src/test/java/org/apache/kafka/clients/MockClient.java 
 8f1a7a625e4eeafa44bbf9e5cff987de86c949be 
   
 clients/src/test/java/org/apache/kafka/clients/consumer/internals/CoordinatorTest.java
  PRE-CREATION 
   
 clients/src/test/java/org/apache/kafka/clients/consumer/internals/FetchManagerTest.java
  PRE-CREATION 
   
 clients/src/test/java/org/apache/kafka/clients/consumer/internals/HeartbeatTest.java
  PRE-CREATION 
   
 

[jira] [Commented] (KAFKA-1992) Following KAFKA-1697, checkEnoughReplicasReachOffset doesn't need to get requiredAcks

2015-03-06 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14350940#comment-14350940
 ] 

Gwen Shapira commented on KAFKA-1992:
-

Updated reviewboard https://reviews.apache.org/r/31591/diff/
 against branch trunk

 Following KAFKA-1697, checkEnoughReplicasReachOffset doesn't need to get 
 requiredAcks
 -

 Key: KAFKA-1992
 URL: https://issues.apache.org/jira/browse/KAFKA-1992
 Project: Kafka
  Issue Type: Bug
Reporter: Gwen Shapira
Assignee: Gwen Shapira
 Attachments: KAFKA-1992.patch, KAFKA-1992_2015-03-03_14:16:34.patch, 
 KAFKA-1992_2015-03-03_17:17:43.patch, KAFKA-1992_2015-03-06_13:34:20.patch, 
 KAFKA-1992_2015-03-06_13:36:32.patch


 Follow up from Jun's review:
 Should we just remove requiredAcks completely since 
 checkEnoughReplicasReachOffset() will only be called when requiredAcks is -1?
 Answer is: Yes, we should :)





[jira] [Issue Comment Deleted] (KAFKA-1646) Improve consumer read performance for Windows

2015-03-06 Thread Honghai Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Honghai Chen updated KAFKA-1646:

Comment: was deleted

(was: Updated reviewboard  against branch origin/0.8.1)

 Improve consumer read performance for Windows
 -

 Key: KAFKA-1646
 URL: https://issues.apache.org/jira/browse/KAFKA-1646
 Project: Kafka
  Issue Type: Improvement
  Components: log
Affects Versions: 0.8.1.1
 Environment: Windows
Reporter: xueqiang wang
Assignee: xueqiang wang
  Labels: newbie, patch
 Attachments: Improve consumer read performance for Windows.patch, 
 KAFKA-1646-truncate-off-trailing-zeros-on-broker-restart-if-bro.patch, 
 KAFKA-1646_20141216_163008.patch, KAFKA-1646_20150306_002850.patch


 This patch is for the Windows platform only. On Windows, if there is 
 more than one replica writing to disk, the segment log files will not be 
 consistent on disk and consumer read performance will drop 
 greatly. This fix allocates more disk space when rolling a new segment, 
 which improves consumer read performance on the NTFS file system.
 This patch doesn't affect file allocation on other filesystems, since it only 
 adds statements like 'if(Os.isWindows)' or adds methods used on Windows.
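A minimal sketch of the preallocation idea, assuming a simplified segment-open helper (this is not the actual patch, which lives in FileMessageSet/LogSegment, gates the behavior on the OS check, and handles recovery):

```java
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;

// Sketch: reserve the segment's full configured size up front so NTFS can
// lay the file out contiguously instead of growing it in fragments as
// messages are appended.
public class Preallocate {
    public static RandomAccessFile openSegment(File file, boolean preallocate, long segmentBytes)
            throws IOException {
        RandomAccessFile raf = new RandomAccessFile(file, "rw");
        if (preallocate)                  // the real patch enables this only on Windows
            raf.setLength(segmentBytes);  // reserve the space immediately
        return raf;
    }
}
```

On roll or clean shutdown the fix then truncates the file back to its actual data size, so readers never see the zero-filled tail.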





[jira] [Commented] (KAFKA-1646) Improve consumer read performance for Windows

2015-03-06 Thread Honghai Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14350101#comment-14350101
 ] 

Honghai Chen commented on KAFKA-1646:
-

Updated reviewboard  against branch origin/0.8.1

 Improve consumer read performance for Windows
 -

 Key: KAFKA-1646
 URL: https://issues.apache.org/jira/browse/KAFKA-1646
 Project: Kafka
  Issue Type: Improvement
  Components: log
Affects Versions: 0.8.1.1
 Environment: Windows
Reporter: xueqiang wang
Assignee: xueqiang wang
  Labels: newbie, patch
 Attachments: Improve consumer read performance for Windows.patch, 
 KAFKA-1646-truncate-off-trailing-zeros-on-broker-restart-if-bro.patch, 
 KAFKA-1646_20141216_163008.patch, KAFKA-1646_20150306_002850.patch


 This patch is for the Windows platform only. On Windows, if there is 
 more than one replica writing to disk, the segment log files will not be 
 consistent on disk and consumer read performance will drop 
 greatly. This fix allocates more disk space when rolling a new segment, 
 which improves consumer read performance on the NTFS file system.
 This patch doesn't affect file allocation on other filesystems, since it only 
 adds statements like 'if(Os.isWindows)' or adds methods used on Windows.





[jira] [Updated] (KAFKA-1646) Improve consumer read performance for Windows

2015-03-06 Thread Honghai Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Honghai Chen updated KAFKA-1646:

Attachment: KAFKA-1646_20150306_002850.patch

 Improve consumer read performance for Windows
 -

 Key: KAFKA-1646
 URL: https://issues.apache.org/jira/browse/KAFKA-1646
 Project: Kafka
  Issue Type: Improvement
  Components: log
Affects Versions: 0.8.1.1
 Environment: Windows
Reporter: xueqiang wang
Assignee: xueqiang wang
  Labels: newbie, patch
 Attachments: Improve consumer read performance for Windows.patch, 
 KAFKA-1646-truncate-off-trailing-zeros-on-broker-restart-if-bro.patch, 
 KAFKA-1646_20141216_163008.patch, KAFKA-1646_20150306_002850.patch


 This patch is for the Windows platform only. On Windows, if there is 
 more than one replica writing to disk, the segment log files will not be 
 consistent on disk and consumer read performance will drop 
 greatly. This fix allocates more disk space when rolling a new segment, 
 which improves consumer read performance on the NTFS file system.
 This patch doesn't affect file allocation on other filesystems, since it only 
 adds statements like 'if(Os.isWindows)' or adds methods used on Windows.





[jira] [Commented] (KAFKA-1646) Improve consumer read performance for Windows

2015-03-06 Thread Honghai Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14350110#comment-14350110
 ] 

Honghai Chen commented on KAFKA-1646:
-

Updated reviewboard  against branch origin/0.8.1

 Improve consumer read performance for Windows
 -

 Key: KAFKA-1646
 URL: https://issues.apache.org/jira/browse/KAFKA-1646
 Project: Kafka
  Issue Type: Improvement
  Components: log
Affects Versions: 0.8.1.1
 Environment: Windows
Reporter: xueqiang wang
Assignee: xueqiang wang
  Labels: newbie, patch
 Attachments: Improve consumer read performance for Windows.patch, 
 KAFKA-1646-truncate-off-trailing-zeros-on-broker-restart-if-bro.patch, 
 KAFKA-1646_20141216_163008.patch, KAFKA-1646_20150306_002850.patch, 
 KAFKA-1646_20150306_003432.patch


 This patch is for the Windows platform only. On Windows, if there is 
 more than one replica writing to disk, the segment log files will not be 
 consistent on disk and consumer read performance will drop 
 greatly. This fix allocates more disk space when rolling a new segment, 
 which improves consumer read performance on the NTFS file system.
 This patch doesn't affect file allocation on other filesystems, since it only 
 adds statements like 'if(Os.isWindows)' or adds methods used on Windows.





[jira] [Updated] (KAFKA-1646) Improve consumer read performance for Windows

2015-03-06 Thread Honghai Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Honghai Chen updated KAFKA-1646:

Attachment: KAFKA-1646_20150306_003432.patch

 Improve consumer read performance for Windows
 -

 Key: KAFKA-1646
 URL: https://issues.apache.org/jira/browse/KAFKA-1646
 Project: Kafka
  Issue Type: Improvement
  Components: log
Affects Versions: 0.8.1.1
 Environment: Windows
Reporter: xueqiang wang
Assignee: xueqiang wang
  Labels: newbie, patch
 Attachments: Improve consumer read performance for Windows.patch, 
 KAFKA-1646-truncate-off-trailing-zeros-on-broker-restart-if-bro.patch, 
 KAFKA-1646_20141216_163008.patch, KAFKA-1646_20150306_002850.patch, 
 KAFKA-1646_20150306_003432.patch


 This patch is for the Windows platform only. On Windows, if there is 
 more than one replica writing to disk, the segment log files will not be 
 consistent on disk and consumer read performance will drop 
 greatly. This fix allocates more disk space when rolling a new segment, 
 which improves consumer read performance on the NTFS file system.
 This patch doesn't affect file allocation on other filesystems, since it only 
 adds statements like 'if(Os.isWindows)' or adds methods used on Windows.





[jira] [Updated] (KAFKA-1646) Improve consumer read performance for Windows

2015-03-06 Thread Honghai Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Honghai Chen updated KAFKA-1646:

Attachment: KAFKA-1646_20150306_003722.patch

 Improve consumer read performance for Windows
 -

 Key: KAFKA-1646
 URL: https://issues.apache.org/jira/browse/KAFKA-1646
 Project: Kafka
  Issue Type: Improvement
  Components: log
Affects Versions: 0.8.1.1
 Environment: Windows
Reporter: xueqiang wang
Assignee: xueqiang wang
  Labels: newbie, patch
 Attachments: Improve consumer read performance for Windows.patch, 
 KAFKA-1646-truncate-off-trailing-zeros-on-broker-restart-if-bro.patch, 
 KAFKA-1646_20141216_163008.patch, KAFKA-1646_20150306_002850.patch, 
 KAFKA-1646_20150306_003432.patch, KAFKA-1646_20150306_003722.patch


 This patch is for the Windows platform only. On Windows, if there is 
 more than one replica writing to disk, the segment log files will not be 
 consistent on disk and consumer read performance will drop 
 greatly. This fix allocates more disk space when rolling a new segment, 
 which improves consumer read performance on the NTFS file system.
 This patch doesn't affect file allocation on other filesystems, since it only 
 adds statements like 'if(Os.isWindows)' or adds methods used on Windows.





[jira] [Commented] (KAFKA-1646) Improve consumer read performance for Windows

2015-03-06 Thread Honghai Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14350118#comment-14350118
 ] 

Honghai Chen commented on KAFKA-1646:
-

Updated reviewboard  against branch origin/0.8.1

 Improve consumer read performance for Windows
 -

 Key: KAFKA-1646
 URL: https://issues.apache.org/jira/browse/KAFKA-1646
 Project: Kafka
  Issue Type: Improvement
  Components: log
Affects Versions: 0.8.1.1
 Environment: Windows
Reporter: xueqiang wang
Assignee: xueqiang wang
  Labels: newbie, patch
 Attachments: Improve consumer read performance for Windows.patch, 
 KAFKA-1646-truncate-off-trailing-zeros-on-broker-restart-if-bro.patch, 
 KAFKA-1646_20141216_163008.patch, KAFKA-1646_20150306_002850.patch, 
 KAFKA-1646_20150306_003432.patch, KAFKA-1646_20150306_003722.patch, 
 KAFKA-1646_20150306_004014.patch


 This patch is for the Windows platform only. On Windows, if there is 
 more than one replica writing to disk, the segment log files will not be 
 consistent on disk and consumer read performance will drop 
 greatly. This fix allocates more disk space when rolling a new segment, 
 which improves consumer read performance on the NTFS file system.
 This patch doesn't affect file allocation on other filesystems, since it only 
 adds statements like 'if(Os.isWindows)' or adds methods used on Windows.





[jira] [Updated] (KAFKA-1646) Improve consumer read performance for Windows

2015-03-06 Thread Honghai Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Honghai Chen updated KAFKA-1646:

Attachment: KAFKA-1646_20150306_004321.patch

 Improve consumer read performance for Windows
 -

 Key: KAFKA-1646
 URL: https://issues.apache.org/jira/browse/KAFKA-1646
 Project: Kafka
  Issue Type: Improvement
  Components: log
Affects Versions: 0.8.1.1
 Environment: Windows
Reporter: xueqiang wang
Assignee: xueqiang wang
  Labels: newbie, patch
 Attachments: Improve consumer read performance for Windows.patch, 
 KAFKA-1646-truncate-off-trailing-zeros-on-broker-restart-if-bro.patch, 
 KAFKA-1646_20141216_163008.patch, KAFKA-1646_20150306_002850.patch, 
 KAFKA-1646_20150306_003432.patch, KAFKA-1646_20150306_003722.patch, 
 KAFKA-1646_20150306_004014.patch, KAFKA-1646_20150306_004321.patch


 This patch is for the Windows platform only. On Windows, if there is 
 more than one replica writing to disk, the segment log files will not be 
 consistent on disk and consumer read performance will drop 
 greatly. This fix allocates more disk space when rolling a new segment, 
 which improves consumer read performance on the NTFS file system.
 This patch doesn't affect file allocation on other filesystems, since it only 
 adds statements like 'if(Os.isWindows)' or adds methods used on Windows.





[jira] [Commented] (KAFKA-1646) Improve consumer read performance for Windows

2015-03-06 Thread Honghai Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14350121#comment-14350121
 ] 

Honghai Chen commented on KAFKA-1646:
-

Updated reviewboard  against branch origin/0.8.1

 Improve consumer read performance for Windows
 -

 Key: KAFKA-1646
 URL: https://issues.apache.org/jira/browse/KAFKA-1646
 Project: Kafka
  Issue Type: Improvement
  Components: log
Affects Versions: 0.8.1.1
 Environment: Windows
Reporter: xueqiang wang
Assignee: xueqiang wang
  Labels: newbie, patch
 Attachments: Improve consumer read performance for Windows.patch, 
 KAFKA-1646-truncate-off-trailing-zeros-on-broker-restart-if-bro.patch, 
 KAFKA-1646_20141216_163008.patch, KAFKA-1646_20150306_002850.patch, 
 KAFKA-1646_20150306_003432.patch, KAFKA-1646_20150306_003722.patch, 
 KAFKA-1646_20150306_004014.patch, KAFKA-1646_20150306_004321.patch


 This patch is for the Windows platform only. On Windows, if there is 
 more than one replica writing to disk, the segment log files will not be 
 consistent on disk and consumer read performance will drop 
 greatly. This fix allocates more disk space when rolling a new segment, 
 which improves consumer read performance on the NTFS file system.
 This patch doesn't affect file allocation on other filesystems, since it only 
 adds statements like 'if(Os.isWindows)' or adds methods used on Windows.





Re: Review Request 29091: Improve 1646 fix by truncate extra space when clean shutdown

2015-03-06 Thread Qianlin Xia

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/29091/
---

(Updated March 6, 2015, 8:48 a.m.)


Review request for kafka.


Summary (updated)
-

Improve 1646 fix by truncate extra space when clean shutdown


Bugs: KAFKA-1646
https://issues.apache.org/jira/browse/KAFKA-1646


Repository: kafka


Description (updated)
---

KAFKA-1158 run rat is not needed this is documented now in the release not part 
of the server running


kafka-1244,kafka-1246,kafka-1249; various gradle issues for release; patched by 
Jun Rao; reviewed by Neha Narkhede


KAFKA-1263 Snazzy up the README markdown for better visibility on github; 
patched by Joe Stein; reviewed by Neha Narkhede


KAFKA-1245 the jar files and pom are not being signed so nexus is failing to 
publish them patch by Joe Stein; Reviewed by Jun Rao


KAFKA-1274 gradle.properties needs the variables used in the build.gradle patch 
by Joe Stein; Reviewed by Jun Rao


KAFKA-1254 remove vestigial sbt patch by Joe Stein; reviewed by Jun Rao


kafka-1271; controller logs exceptions during ZK session expiration; patched by 
Jun Rao; reviewed by Guozhang Wang and Jay kreps


auto rebalance last commit


KAFKA-1289 Misc. nitpicks in log cleaner for new 0.8.1 features patch by Jay 
Kreps, reviewed by Sriram Subramanian and Jun Rao


KAFKA-1288 add enclosing dir in release tar gz patch by Jun Rao, reviewed by 
Neha Narkhede


KAFKA-1311 Add a flag to turn off delete topic until it is stable; reviewed by 
Joel and Guozhang


KAFKA-1315 log.dirs property in KafkaServer intolerant of trailing slash; 
reviewed by Neha Narkhede and Guozhang Wang


kafka-1319; kafka jar doesn't depend on metrics-annotation any more; patched by 
Jun Rao; reviewed by Neha Narkhede


KAFKA-1317 KafkaServer 0.8.1 not responding to .shutdown() cleanly, possibly 
related to TopicDeletionManager or MetricsMeter state; reviewed by Neha Narkhede


KAFKA-1317 follow up fix


KAFKA-1350 Fix excessive state change logging;reviewed by Jun,Joel,Guozhang and 
Timothy


KAFKA-1358 Broker throws exception when reconnecting to zookeeper; reviewed by 
Neha Narkhede


KAFKA-1358: Fixing minor log4j statement


KAFKA-1373; Set first dirty (uncompacted) offset to first offset of the log if 
no checkpoint exists. Reviewed by Timothy Chen and Neha Narkhede.


KAFKA-1323; Fix regression due to KAFKA-1315 (support for relative 
directories in log.dirs property broke). Patched by Timothy Chen and 
Guozhang Wang; reviewed by Joel Koshy, Neha Narkhede and Jun Rao.


KAFKA-1356 Topic metadata requests takes too long to process; reviewed by Joel 
Koshy, Neha Narkhede, Jun Rao and Guozhang Wang


KAFKA-1365; Second Manual preferred replica leader election command always 
fails; reviewed by Joel Koshy.


KAFKA-1356 (Follow-up) patch to clean up metadata cache api; reviewed by Jun Rao


KAFKA-1362; Publish sources and javadoc jars; (also removed Scala 
2.8.2-specific actions). Reviewed by Jun Rao and Joe Stein


KAFKA-1355; Avoid sending all topic metadata on state changes. Reviewed by Neha 
Narkhede, Timothy Chen and Guozhang Wang.


KAFKA-1398 dynamic config changes are broken.


KAFKA-1398 Dynamic config follow-on-comments.


KAFKA-1327 Add log cleaner metrics.


KAFKA-1356; follow-up - return unknown topic partition on non-existent topic if 
auto.create is off; reviewed by Timothy Chen, Neha Narkhede and Jun Rao.


KAFKA-1327; Log cleaner metrics follow-up patch to reset dirtiest log cleanable 
ratio; reviewed by Jun Rao


bump kafka version to 0.8.1.1 in gradle.properties patch by Joe Stein reviewed 
by Joel Koshy


KAFKA-1308; Publish jar of test utilities to Maven. Jun Rao and Jakob Homan; 
reviewed by Neha Narkhede.


Improve 1646 fix by truncate extra space when clean shutdown


Merge branch '0.8.1' of http://git-wip-us.apache.org/repos/asf/kafka into 
Branch_0.8.1.1


Diffs (updated)
-

  LICENSE cb1800b0c39afc60a3dbf8249ba98f27a63467f3 
  README-sbt.md 10b8d2523605e8c6b0854f11e37d6e9e24d2814f 
  README.md 9b272b52c8b65668f9f2c9aa15b95b7441735936 
  bin/kafka-run-class.sh 75a3fc42a2e41977fa0d19a53cbc31e7538b8283 
  bin/run-rat.sh 1b7bc312e8b42aca60e630f2c39b976ee8352a77 
  build.gradle 858d297b9e8bf8a2bca54c4817f9ca2affd0d3f2 
  clients/build.sbt ca3c8ee3d7e56cefec2ecf8f21b237615c9bd759 
  config/log4j.properties 1ab850772a965d1f4301678cfe58e3901a11b7e0 
  config/server.properties 2ffe0ebccf1092ddf614b2fcdc327c607dfd685a 
  contrib/LICENSE PRE-CREATION 
  contrib/NOTICE PRE-CREATION 
  contrib/hadoop-consumer/LICENSE 6b0b1270ff0ca8f03867efcd09ba6ddb6392b1e1 
  contrib/hadoop-consumer/build.sbt 02e95eb8ca2c7a97a1f6bef88c4e044ea1f99539 
  contrib/hadoop-producer/LICENSE 6b0b1270ff0ca8f03867efcd09ba6ddb6392b1e1 
  contrib/hadoop-producer/build.sbt 02e95eb8ca2c7a97a1f6bef88c4e044ea1f99539 
  core/build.sbt 

Re: Review Request 29091: Improve 1646 fix by truncate extra space when clean shutdown

2015-03-06 Thread Qianlin Xia

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/29091/
---

(Updated March 6, 2015, 8:51 a.m.)


Review request for kafka.


Bugs: KAFKA-1646
https://issues.apache.org/jira/browse/KAFKA-1646


Repository: kafka


Description (updated)
---

Merge branch '0.8.1' of http://git-wip-us.apache.org/repos/asf/kafka into 
Branch_0.8.1.1


Diffs (updated)
-

  core/src/main/scala/kafka/log/FileMessageSet.scala 
e1f8b979c3e6f62ea235bd47bc1587a1291443f9 
  core/src/main/scala/kafka/log/Log.scala 
46df8d99d977a3b010a9b9f4698187fa9bfb2498 
  core/src/main/scala/kafka/log/LogManager.scala 
7cee5435b23fcd0d76f531004911a2ca499df4f8 
  core/src/main/scala/kafka/log/LogSegment.scala 
0d6926ea105a99c9ff2cfc9ea6440f2f2d37bde8 
  core/src/main/scala/kafka/utils/Utils.scala 
a89b0463685e6224d263bc9177075e1bb6b93d04 

Diff: https://reviews.apache.org/r/29091/diff/


Testing
---


Thanks,

Qianlin Xia



[jira] [Updated] (KAFKA-1646) Improve consumer read performance for Windows

2015-03-06 Thread Honghai Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Honghai Chen updated KAFKA-1646:

Attachment: (was: KAFKA-1646_20150306_002850.patch)

 Improve consumer read performance for Windows
 -

 Key: KAFKA-1646
 URL: https://issues.apache.org/jira/browse/KAFKA-1646
 Project: Kafka
  Issue Type: Improvement
  Components: log
Affects Versions: 0.8.1.1
 Environment: Windows
Reporter: xueqiang wang
Assignee: xueqiang wang
  Labels: newbie, patch
 Attachments: Improve consumer read performance for Windows.patch, 
 KAFKA-1646-truncate-off-trailing-zeros-on-broker-restart-if-bro.patch, 
 KAFKA-1646_20141216_163008.patch, KAFKA-1646_20150306_004321.patch


 This patch is for the Windows platform only. On Windows, if there is 
 more than one replica writing to disk, the segment log files will not be 
 consistent on disk and consumer read performance will drop 
 greatly. This fix allocates more disk space when rolling a new segment, 
 which improves consumer read performance on the NTFS file system.
 This patch doesn't affect file allocation on other filesystems, since it only 
 adds statements like 'if(Os.isWindows)' or adds methods used on Windows.





[jira] [Updated] (KAFKA-1646) Improve consumer read performance for Windows

2015-03-06 Thread Honghai Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Honghai Chen updated KAFKA-1646:

Attachment: (was: KAFKA-1646_20150306_004321.patch)

 Improve consumer read performance for Windows
 -

 Key: KAFKA-1646
 URL: https://issues.apache.org/jira/browse/KAFKA-1646
 Project: Kafka
  Issue Type: Improvement
  Components: log
Affects Versions: 0.8.1.1
 Environment: Windows
Reporter: xueqiang wang
Assignee: xueqiang wang
  Labels: newbie, patch
 Attachments: Improve consumer read performance for Windows.patch, 
 KAFKA-1646-truncate-off-trailing-zeros-on-broker-restart-if-bro.patch, 
 KAFKA-1646_20141216_163008.patch


 This patch is for the Windows platform only. On Windows, if there is 
 more than one replica writing to disk, the segment log files will not be 
 consistent on disk and consumer read performance will drop 
 greatly. This fix allocates more disk space when rolling a new segment, 
 which improves consumer read performance on the NTFS file system.
 This patch doesn't affect file allocation on other filesystems, since it only 
 adds statements like 'if(Os.isWindows)' or adds methods used on Windows.





[jira] [Updated] (KAFKA-1646) Improve consumer read performance for Windows

2015-03-06 Thread Honghai Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Honghai Chen updated KAFKA-1646:

Attachment: (was: KAFKA-1646_20150306_004014.patch)

 Improve consumer read performance for Windows
 -

 Key: KAFKA-1646
 URL: https://issues.apache.org/jira/browse/KAFKA-1646
 Project: Kafka
  Issue Type: Improvement
  Components: log
Affects Versions: 0.8.1.1
 Environment: Windows
Reporter: xueqiang wang
Assignee: xueqiang wang
  Labels: newbie, patch
 Attachments: Improve consumer read performance for Windows.patch, 
 KAFKA-1646-truncate-off-trailing-zeros-on-broker-restart-if-bro.patch, 
 KAFKA-1646_20141216_163008.patch, KAFKA-1646_20150306_004321.patch


 This patch is for the Windows platform only. On Windows, if more than one 
 replica is writing to disk, the segment log files will not be laid out 
 contiguously on disk, and consumer read performance drops significantly. 
 This fix allocates more disk space when rolling a new segment, which 
 improves consumer read performance on the NTFS file system. The patch does 
 not affect file allocation on other filesystems, since it only adds 
 statements like 'if(Os.iswindow)' or methods used only on Windows.





[jira] [Issue Comment Deleted] (KAFKA-1646) Improve consumer read performance for Windows

2015-03-06 Thread Honghai Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Honghai Chen updated KAFKA-1646:

Comment: was deleted

(was: Updated reviewboard  against branch origin/0.8.1)

 Improve consumer read performance for Windows
 -

 Key: KAFKA-1646
 URL: https://issues.apache.org/jira/browse/KAFKA-1646
 Project: Kafka
  Issue Type: Improvement
  Components: log
Affects Versions: 0.8.1.1
 Environment: Windows
Reporter: xueqiang wang
Assignee: xueqiang wang
  Labels: newbie, patch
 Attachments: Improve consumer read performance for Windows.patch, 
 KAFKA-1646-truncate-off-trailing-zeros-on-broker-restart-if-bro.patch, 
 KAFKA-1646_20141216_163008.patch, KAFKA-1646_20150306_004321.patch


 This patch is for the Windows platform only. On Windows, if more than one 
 replica is writing to disk, the segment log files will not be laid out 
 contiguously on disk, and consumer read performance drops significantly. 
 This fix allocates more disk space when rolling a new segment, which 
 improves consumer read performance on the NTFS file system. The patch does 
 not affect file allocation on other filesystems, since it only adds 
 statements like 'if(Os.iswindow)' or methods used only on Windows.





[jira] [Updated] (KAFKA-1646) Improve consumer read performance for Windows

2015-03-06 Thread Honghai Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Honghai Chen updated KAFKA-1646:

Attachment: (was: KAFKA-1646_20150306_003432.patch)

 Improve consumer read performance for Windows
 -

 Key: KAFKA-1646
 URL: https://issues.apache.org/jira/browse/KAFKA-1646
 Project: Kafka
  Issue Type: Improvement
  Components: log
Affects Versions: 0.8.1.1
 Environment: Windows
Reporter: xueqiang wang
Assignee: xueqiang wang
  Labels: newbie, patch
 Attachments: Improve consumer read performance for Windows.patch, 
 KAFKA-1646-truncate-off-trailing-zeros-on-broker-restart-if-bro.patch, 
 KAFKA-1646_20141216_163008.patch, KAFKA-1646_20150306_004321.patch


 This patch is for the Windows platform only. On Windows, if more than one 
 replica is writing to disk, the segment log files will not be laid out 
 contiguously on disk, and consumer read performance drops significantly. 
 This fix allocates more disk space when rolling a new segment, which 
 improves consumer read performance on the NTFS file system. The patch does 
 not affect file allocation on other filesystems, since it only adds 
 statements like 'if(Os.iswindow)' or methods used only on Windows.





[jira] [Commented] (KAFKA-1646) Improve consumer read performance for Windows

2015-03-06 Thread Honghai Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14350126#comment-14350126
 ] 

Honghai Chen commented on KAFKA-1646:
-

Updated reviewboard  against branch origin/0.8.1

 Improve consumer read performance for Windows
 -

 Key: KAFKA-1646
 URL: https://issues.apache.org/jira/browse/KAFKA-1646
 Project: Kafka
  Issue Type: Improvement
  Components: log
Affects Versions: 0.8.1.1
 Environment: Windows
Reporter: xueqiang wang
Assignee: xueqiang wang
  Labels: newbie, patch
 Attachments: Improve consumer read performance for Windows.patch, 
 KAFKA-1646-truncate-off-trailing-zeros-on-broker-restart-if-bro.patch, 
 KAFKA-1646_20141216_163008.patch, KAFKA-1646_20150306_005526.patch


 This patch is for the Windows platform only. On Windows, if more than one 
 replica is writing to disk, the segment log files will not be laid out 
 contiguously on disk, and consumer read performance drops significantly. 
 This fix allocates more disk space when rolling a new segment, which 
 improves consumer read performance on the NTFS file system. The patch does 
 not affect file allocation on other filesystems, since it only adds 
 statements like 'if(Os.iswindow)' or methods used only on Windows.





[jira] [Issue Comment Deleted] (KAFKA-1646) Improve consumer read performance for Windows

2015-03-06 Thread Honghai Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Honghai Chen updated KAFKA-1646:

Comment: was deleted

(was: Updated reviewboard  against branch origin/0.8.1)

 Improve consumer read performance for Windows
 -

 Key: KAFKA-1646
 URL: https://issues.apache.org/jira/browse/KAFKA-1646
 Project: Kafka
  Issue Type: Improvement
  Components: log
Affects Versions: 0.8.1.1
 Environment: Windows
Reporter: xueqiang wang
Assignee: xueqiang wang
  Labels: newbie, patch
 Attachments: Improve consumer read performance for Windows.patch, 
 KAFKA-1646-truncate-off-trailing-zeros-on-broker-restart-if-bro.patch, 
 KAFKA-1646_20141216_163008.patch, KAFKA-1646_20150306_005526.patch


 This patch is for the Windows platform only. On Windows, if more than one 
 replica is writing to disk, the segment log files will not be laid out 
 contiguously on disk, and consumer read performance drops significantly. 
 This fix allocates more disk space when rolling a new segment, which 
 improves consumer read performance on the NTFS file system. The patch does 
 not affect file allocation on other filesystems, since it only adds 
 statements like 'if(Os.iswindow)' or methods used only on Windows.





[jira] [Issue Comment Deleted] (KAFKA-1646) Improve consumer read performance for Windows

2015-03-06 Thread Honghai Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Honghai Chen updated KAFKA-1646:

Comment: was deleted

(was: Updated reviewboard  against branch origin/0.8.1)

 Improve consumer read performance for Windows
 -

 Key: KAFKA-1646
 URL: https://issues.apache.org/jira/browse/KAFKA-1646
 Project: Kafka
  Issue Type: Improvement
  Components: log
Affects Versions: 0.8.1.1
 Environment: Windows
Reporter: xueqiang wang
Assignee: xueqiang wang
  Labels: newbie, patch
 Attachments: Improve consumer read performance for Windows.patch, 
 KAFKA-1646-truncate-off-trailing-zeros-on-broker-restart-if-bro.patch, 
 KAFKA-1646_20141216_163008.patch, KAFKA-1646_20150306_005526.patch


 This patch is for the Windows platform only. On Windows, if more than one 
 replica is writing to disk, the segment log files will not be laid out 
 contiguously on disk, and consumer read performance drops significantly. 
 This fix allocates more disk space when rolling a new segment, which 
 improves consumer read performance on the NTFS file system. The patch does 
 not affect file allocation on other filesystems, since it only adds 
 statements like 'if(Os.iswindow)' or methods used only on Windows.





[jira] [Issue Comment Deleted] (KAFKA-1646) Improve consumer read performance for Windows

2015-03-06 Thread Honghai Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Honghai Chen updated KAFKA-1646:

Comment: was deleted

(was: Updated reviewboard  against branch origin/0.8.1)

 Improve consumer read performance for Windows
 -

 Key: KAFKA-1646
 URL: https://issues.apache.org/jira/browse/KAFKA-1646
 Project: Kafka
  Issue Type: Improvement
  Components: log
Affects Versions: 0.8.1.1
 Environment: Windows
Reporter: xueqiang wang
Assignee: xueqiang wang
  Labels: newbie, patch
 Attachments: Improve consumer read performance for Windows.patch, 
 KAFKA-1646-truncate-off-trailing-zeros-on-broker-restart-if-bro.patch, 
 KAFKA-1646_20141216_163008.patch, KAFKA-1646_20150306_005526.patch


 This patch is for the Windows platform only. On Windows, if more than one 
 replica is writing to disk, the segment log files will not be laid out 
 contiguously on disk, and consumer read performance drops significantly. 
 This fix allocates more disk space when rolling a new segment, which 
 improves consumer read performance on the NTFS file system. The patch does 
 not affect file allocation on other filesystems, since it only adds 
 statements like 'if(Os.iswindow)' or methods used only on Windows.





[jira] [Updated] (KAFKA-1646) Improve consumer read performance for Windows

2015-03-06 Thread Honghai Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Honghai Chen updated KAFKA-1646:

Attachment: KAFKA-1646_20150306_005526.patch

 Improve consumer read performance for Windows
 -

 Key: KAFKA-1646
 URL: https://issues.apache.org/jira/browse/KAFKA-1646
 Project: Kafka
  Issue Type: Improvement
  Components: log
Affects Versions: 0.8.1.1
 Environment: Windows
Reporter: xueqiang wang
Assignee: xueqiang wang
  Labels: newbie, patch
 Attachments: Improve consumer read performance for Windows.patch, 
 KAFKA-1646-truncate-off-trailing-zeros-on-broker-restart-if-bro.patch, 
 KAFKA-1646_20141216_163008.patch, KAFKA-1646_20150306_005526.patch


 This patch is for the Windows platform only. On Windows, if more than one 
 replica is writing to disk, the segment log files will not be laid out 
 contiguously on disk, and consumer read performance drops significantly. 
 This fix allocates more disk space when rolling a new segment, which 
 improves consumer read performance on the NTFS file system. The patch does 
 not affect file allocation on other filesystems, since it only adds 
 statements like 'if(Os.iswindow)' or methods used only on Windows.


