[jira] [Commented] (KAFKA-291) Add builder to create configs for consumer and broker

2013-12-30 Thread Swapnil Ghike (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13859341#comment-13859341
 ] 

Swapnil Ghike commented on KAFKA-291:
-

Not sure if the ConfigBuilder is going to reduce complexity. Strings like 
localhost:2181 will generally not be hardcoded in Java code; they will be 
passed in from some config map, and that passing of config values will have 
the same caveats. 

 Add builder to create configs for consumer and broker
 -

 Key: KAFKA-291
 URL: https://issues.apache.org/jira/browse/KAFKA-291
 Project: Kafka
  Issue Type: Improvement
  Components: core
Affects Versions: 0.7
Reporter: John Wang
 Attachments: builderPatch.diff


 Creating a Consumer or Producer can be cumbersome because you have to 
 remember the exact string for each property to be set. And since these are 
 just strings, IDEs cannot really help.
 This patch contains builders that help with this.
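
For illustration only (a hedged sketch in Scala; this is not the API from 
builderPatch.diff, and the 0.7-era property keys zk.connect and groupid are 
assumptions), a builder of this shape replaces raw key strings with 
compiler-checked methods:

    import java.util.Properties

    // Hypothetical builder, not the one in builderPatch.diff.
    class ConsumerConfigBuilder {
      private val props = new Properties()

      // Each setter hides the exact property-key string, so the IDE can
      // auto-complete and a typo becomes a compile error instead of a
      // silently ignored property.
      def zkConnect(hosts: String): ConsumerConfigBuilder = {
        props.put("zk.connect", hosts); this
      }
      def groupId(id: String): ConsumerConfigBuilder = {
        props.put("groupid", id); this
      }
      def build(): Properties = props
    }

    // Usage. As the comment above notes, the values themselves may still
    // come from an external config map; the builder only removes the keys.
    val props = new ConsumerConfigBuilder()
      .zkConnect("localhost:2181")
      .groupId("test-group")
      .build()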



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (KAFKA-1152) ReplicaManager's handling of the leaderAndIsrRequest should gracefully handle leader == -1

2013-11-28 Thread Swapnil Ghike (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swapnil Ghike updated KAFKA-1152:
-

Attachment: KAFKA-1152_2013-11-28_10:19:05.patch

 ReplicaManager's handling of the leaderAndIsrRequest should gracefully handle 
 leader == -1
 --

 Key: KAFKA-1152
 URL: https://issues.apache.org/jira/browse/KAFKA-1152
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8
Reporter: Swapnil Ghike
Assignee: Swapnil Ghike
 Fix For: 0.8.1

 Attachments: KAFKA-1152.patch, KAFKA-1152_2013-11-28_10:19:05.patch


 If a partition is created with replication factor 1, then the controller can 
 set the partition's leader to -1 in leaderAndIsrRequest when the only replica 
 of the partition is being bounced. 
 The handling of this request with a leader == -1 throws an exception on the 
 ReplicaManager which prevents the addition of fetchers for the remaining 
 partitions in the leaderAndIsrRequest.
 After the replica is bounced, the replica first receives a 
 leaderAndIsrRequest with leader == -1, then it receives another 
 leaderAndIsrRequest with the correct leader (which is the replica itself) due 
 to OfflinePartition to OnlinePartition state change. 
 In handling the first request, ReplicaManager should ignore the partition for 
 which the request has leader == -1, and continue addition of fetchers for the 
 remaining partitions. The next leaderAndIsrRequest will take care of setting 
 the correct leader for that partition.
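
A minimal sketch of the proposed handling (in Scala; the type and method 
names below are illustrative, not the actual ReplicaManager signatures):

    // Stand-in for the per-partition state in a leaderAndIsrRequest.
    case class PartitionStateInfo(topic: String, partition: Int, leader: Int)

    def becomeLeaderOrFollower(states: Seq[PartitionStateInfo]): Unit = {
      // Partitions with leader == -1 have no live leader yet: skip them
      // instead of throwing, so fetcher addition continues for the rest.
      val (noLeader, withLeader) = states.partition(_.leader == -1)
      noLeader.foreach(s =>
        println("Ignoring [" + s.topic + "," + s.partition + "]: leader is " +
          "-1, the next leaderAndIsrRequest will assign one"))
      withLeader.foreach(addFetcher)
    }

    def addFetcher(s: PartitionStateInfo): Unit = () // placeholder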



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (KAFKA-1152) ReplicaManager's handling of the leaderAndIsrRequest should gracefully handle leader == -1

2013-11-28 Thread Swapnil Ghike (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13835022#comment-13835022
 ] 

Swapnil Ghike commented on KAFKA-1152:
--

Updated reviewboard https://reviews.apache.org/r/15901/
 against branch origin/trunk

 ReplicaManager's handling of the leaderAndIsrRequest should gracefully handle 
 leader == -1
 --

 Key: KAFKA-1152
 URL: https://issues.apache.org/jira/browse/KAFKA-1152
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8
Reporter: Swapnil Ghike
Assignee: Swapnil Ghike
 Fix For: 0.8.1

 Attachments: KAFKA-1152.patch, KAFKA-1152_2013-11-28_10:19:05.patch


 If a partition is created with replication factor 1, then the controller can 
 set the partition's leader to -1 in leaderAndIsrRequest when the only replica 
 of the partition is being bounced. 
 The handling of this request with a leader == -1 throws an exception on the 
 ReplicaManager which prevents the addition of fetchers for the remaining 
 partitions in the leaderAndIsrRequest.
 After the replica is bounced, the replica first receives a 
 leaderAndIsrRequest with leader == -1, then it receives another 
 leaderAndIsrRequest with the correct leader (which is the replica itself) due 
 to OfflinePartition to OnlinePartition state change. 
 In handling the first request, ReplicaManager should ignore the partition for 
 which the request has leader == -1, and continue addition of fetchers for the 
 remaining partitions. The next leaderAndIsrRequest will take care of setting 
 the correct leader for that partition.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (KAFKA-1152) ReplicaManager's handling of the leaderAndIsrRequest should gracefully handle leader == -1

2013-11-28 Thread Swapnil Ghike (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swapnil Ghike updated KAFKA-1152:
-

Attachment: KAFKA-1152.patch

 ReplicaManager's handling of the leaderAndIsrRequest should gracefully handle 
 leader == -1
 --

 Key: KAFKA-1152
 URL: https://issues.apache.org/jira/browse/KAFKA-1152
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8
Reporter: Swapnil Ghike
Assignee: Swapnil Ghike
 Fix For: 0.8.1

 Attachments: KAFKA-1152.patch, KAFKA-1152.patch, 
 KAFKA-1152_2013-11-28_10:19:05.patch


 If a partition is created with replication factor 1, then the controller can 
 set the partition's leader to -1 in leaderAndIsrRequest when the only replica 
 of the partition is being bounced. 
 The handling of this request with a leader == -1 throws an exception on the 
 ReplicaManager which prevents the addition of fetchers for the remaining 
 partitions in the leaderAndIsrRequest.
 After the replica is bounced, the replica first receives a 
 leaderAndIsrRequest with leader == -1, then it receives another 
 leaderAndIsrRequest with the correct leader (which is the replica itself) due 
 to OfflinePartition to OnlinePartition state change. 
 In handling the first request, ReplicaManager should ignore the partition for 
 which the request has leader == -1, and continue addition of fetchers for the 
 remaining partitions. The next leaderAndIsrRequest will take care of setting 
 the correct leader for that partition.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (KAFKA-1152) ReplicaManager's handling of the leaderAndIsrRequest should gracefully handle leader == -1

2013-11-28 Thread Swapnil Ghike (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13835198#comment-13835198
 ] 

Swapnil Ghike commented on KAFKA-1152:
--

Created reviewboard https://reviews.apache.org/r/15915/
 against branch origin/trunk

 ReplicaManager's handling of the leaderAndIsrRequest should gracefully handle 
 leader == -1
 --

 Key: KAFKA-1152
 URL: https://issues.apache.org/jira/browse/KAFKA-1152
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8
Reporter: Swapnil Ghike
Assignee: Swapnil Ghike
 Fix For: 0.8.1

 Attachments: KAFKA-1152.patch, KAFKA-1152.patch, 
 KAFKA-1152_2013-11-28_10:19:05.patch


 If a partition is created with replication factor 1, then the controller can 
 set the partition's leader to -1 in leaderAndIsrRequest when the only replica 
 of the partition is being bounced. 
 The handling of this request with a leader == -1 throws an exception on the 
 ReplicaManager which prevents the addition of fetchers for the remaining 
 partitions in the leaderAndIsrRequest.
 After the replica is bounced, the replica first receives a 
 leaderAndIsrRequest with leader == -1, then it receives another 
 leaderAndIsrRequest with the correct leader (which is the replica itself) due 
 to OfflinePartition to OnlinePartition state change. 
 In handling the first request, ReplicaManager should ignore the partition for 
 which the request has leader == -1, and continue addition of fetchers for the 
 remaining partitions. The next leaderAndIsrRequest will take care of setting 
 the correct leader for that partition.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (KAFKA-1152) ReplicaManager's handling of the leaderAndIsrRequest should gracefully handle leader == -1

2013-11-28 Thread Swapnil Ghike (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13835205#comment-13835205
 ] 

Swapnil Ghike commented on KAFKA-1152:
--

Updated reviewboard https://reviews.apache.org/r/15901/
 against branch origin/trunk

 ReplicaManager's handling of the leaderAndIsrRequest should gracefully handle 
 leader == -1
 --

 Key: KAFKA-1152
 URL: https://issues.apache.org/jira/browse/KAFKA-1152
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8
Reporter: Swapnil Ghike
Assignee: Swapnil Ghike
 Fix For: 0.8.1

 Attachments: KAFKA-1152.patch, KAFKA-1152_2013-11-28_10:19:05.patch, 
 KAFKA-1152_2013-11-28_22:40:55.patch


 If a partition is created with replication factor 1, then the controller can 
 set the partition's leader to -1 in leaderAndIsrRequest when the only replica 
 of the partition is being bounced. 
 The handling of this request with a leader == -1 throws an exception on the 
 ReplicaManager which prevents the addition of fetchers for the remaining 
 partitions in the leaderAndIsrRequest.
 After the replica is bounced, the replica first receives a 
 leaderAndIsrRequest with leader == -1, then it receives another 
 leaderAndIsrRequest with the correct leader (which is the replica itself) due 
 to OfflinePartition to OnlinePartition state change. 
 In handling the first request, ReplicaManager should ignore the partition for 
 which the request has leader == -1, and continue addition of fetchers for the 
 remaining partitions. The next leaderAndIsrRequest will take care of setting 
 the correct leader for that partition.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (KAFKA-1152) ReplicaManager's handling of the leaderAndIsrRequest should gracefully handle leader == -1

2013-11-28 Thread Swapnil Ghike (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swapnil Ghike updated KAFKA-1152:
-

Attachment: KAFKA-1152_2013-11-28_22:40:55.patch

 ReplicaManager's handling of the leaderAndIsrRequest should gracefully handle 
 leader == -1
 --

 Key: KAFKA-1152
 URL: https://issues.apache.org/jira/browse/KAFKA-1152
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8
Reporter: Swapnil Ghike
Assignee: Swapnil Ghike
 Fix For: 0.8.1

 Attachments: KAFKA-1152.patch, KAFKA-1152_2013-11-28_10:19:05.patch, 
 KAFKA-1152_2013-11-28_22:40:55.patch


 If a partition is created with replication factor 1, then the controller can 
 set the partition's leader to -1 in leaderAndIsrRequest when the only replica 
 of the partition is being bounced. 
 The handling of this request with a leader == -1 throws an exception on the 
 ReplicaManager which prevents the addition of fetchers for the remaining 
 partitions in the leaderAndIsrRequest.
 After the replica is bounced, the replica first receives a 
 leaderAndIsrRequest with leader == -1, then it receives another 
 leaderAndIsrRequest with the correct leader (which is the replica itself) due 
 to OfflinePartition to OnlinePartition state change. 
 In handling the first request, ReplicaManager should ignore the partition for 
 which the request has leader == -1, and continue addition of fetchers for the 
 remaining partitions. The next leaderAndIsrRequest will take care of setting 
 the correct leader for that partition.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (KAFKA-1152) ReplicaManager's handling of the leaderAndIsrRequest should gracefully handle leader == -1

2013-11-28 Thread Swapnil Ghike (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swapnil Ghike updated KAFKA-1152:
-

Attachment: (was: KAFKA-1152.patch)

 ReplicaManager's handling of the leaderAndIsrRequest should gracefully handle 
 leader == -1
 --

 Key: KAFKA-1152
 URL: https://issues.apache.org/jira/browse/KAFKA-1152
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8
Reporter: Swapnil Ghike
Assignee: Swapnil Ghike
 Fix For: 0.8.1

 Attachments: KAFKA-1152.patch, KAFKA-1152_2013-11-28_10:19:05.patch, 
 KAFKA-1152_2013-11-28_22:40:55.patch


 If a partition is created with replication factor 1, then the controller can 
 set the partition's leader to -1 in leaderAndIsrRequest when the only replica 
 of the partition is being bounced. 
 The handling of this request with a leader == -1 throws an exception on the 
 ReplicaManager which prevents the addition of fetchers for the remaining 
 partitions in the leaderAndIsrRequest.
 After the replica is bounced, the replica first receives a 
 leaderAndIsrRequest with leader == -1, then it receives another 
 leaderAndIsrRequest with the correct leader (which is the replica itself) due 
 to OfflinePartition to OnlinePartition state change. 
 In handling the first request, ReplicaManager should ignore the partition for 
 which the request has leader == -1, and continue addition of fetchers for the 
 remaining partitions. The next leaderAndIsrRequest will take care of setting 
 the correct leader for that partition.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (KAFKA-1152) ReplicaManager's handling of the leaderAndIsrRequest should gracefully handle leader == -1

2013-11-27 Thread Swapnil Ghike (JIRA)
Swapnil Ghike created KAFKA-1152:


 Summary: ReplicaManager's handling of the leaderAndIsrRequest 
should gracefully handle leader == -1
 Key: KAFKA-1152
 URL: https://issues.apache.org/jira/browse/KAFKA-1152
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8
Reporter: Swapnil Ghike
Assignee: Swapnil Ghike
 Fix For: 0.8.1


If a partition is created with replication factor 1, then the controller can 
set the partition's leader to -1 in leaderAndIsrRequest when the only replica 
of the partition is being bounced. 

The handling of this request with a leader == -1 throws an exception on the 
ReplicaManager which prevents the addition of fetchers for the remaining 
partitions in the leaderAndIsrRequest.

After the replica is bounced, the replica first receives a leaderAndIsrRequest 
with leader == -1, then it receives another leaderAndIsrRequest with the 
correct leader (which is the replica itself) due to OfflinePartition to 
OnlinePartition state change. 

In handling the first request, ReplicaManager should ignore the partition for 
which the request has leader == -1, and continue addition of fetchers for the 
remaining partitions. The next leaderAndIsrRequest will take care of setting 
the correct leader for that partition.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (KAFKA-1152) ReplicaManager's handling of the leaderAndIsrRequest should gracefully handle leader == -1

2013-11-27 Thread Swapnil Ghike (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swapnil Ghike updated KAFKA-1152:
-

Attachment: KAFKA-1152.patch

 ReplicaManager's handling of the leaderAndIsrRequest should gracefully handle 
 leader == -1
 --

 Key: KAFKA-1152
 URL: https://issues.apache.org/jira/browse/KAFKA-1152
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8
Reporter: Swapnil Ghike
Assignee: Swapnil Ghike
 Fix For: 0.8.1

 Attachments: KAFKA-1152.patch


 If a partition is created with replication factor 1, then the controller can 
 set the partition's leader to -1 in leaderAndIsrRequest when the only replica 
 of the partition is being bounced. 
 The handling of this request with a leader == -1 throws an exception on the 
 ReplicaManager which prevents the addition of fetchers for the remaining 
 partitions in the leaderAndIsrRequest.
 After the replica is bounced, the replica first receives a 
 leaderAndIsrRequest with leader == -1, then it receives another 
 leaderAndIsrRequest with the correct leader (which is the replica itself) due 
 to OfflinePartition to OnlinePartition state change. 
 In handling the first request, ReplicaManager should ignore the partition for 
 which the request has leader == -1, and continue addition of fetchers for the 
 remaining partitions. The next leaderAndIsrRequest will take care of setting 
 the correct leader for that partition.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (KAFKA-1152) ReplicaManager's handling of the leaderAndIsrRequest should gracefully handle leader == -1

2013-11-27 Thread Swapnil Ghike (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13834544#comment-13834544
 ] 

Swapnil Ghike commented on KAFKA-1152:
--

Created reviewboard https://reviews.apache.org/r/15901/
 against branch origin/trunk

 ReplicaManager's handling of the leaderAndIsrRequest should gracefully handle 
 leader == -1
 --

 Key: KAFKA-1152
 URL: https://issues.apache.org/jira/browse/KAFKA-1152
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8
Reporter: Swapnil Ghike
Assignee: Swapnil Ghike
 Fix For: 0.8.1

 Attachments: KAFKA-1152.patch


 If a partition is created with replication factor 1, then the controller can 
 set the partition's leader to -1 in leaderAndIsrRequest when the only replica 
 of the partition is being bounced. 
 The handling of this request with a leader == -1 throws an exception on the 
 ReplicaManager which prevents the addition of fetchers for the remaining 
 partitions in the leaderAndIsrRequest.
 After the replica is bounced, the replica first receives a 
 leaderAndIsrRequest with leader == -1, then it receives another 
 leaderAndIsrRequest with the correct leader (which is the replica itself) due 
 to OfflinePartition to OnlinePartition state change. 
 In handling the first request, ReplicaManager should ignore the partition for 
 which the request has leader == -1, and continue addition of fetchers for the 
 remaining partitions. The next leaderAndIsrRequest will take care of setting 
 the correct leader for that partition.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (KAFKA-1135) Code cleanup - use Json.encode() to write json data to zk

2013-11-25 Thread Swapnil Ghike (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13832037#comment-13832037
 ] 

Swapnil Ghike commented on KAFKA-1135:
--

Thanks for catching this, David! Jun, it seems that the diff on the reviewboard 
and the one attached to this JIRA are different. Can you please revert commit 
9b0776d157afd9eacddb84a99f2420fa9c0d505b, download the diff from the 
reviewboard and commit it?

 Code cleanup - use Json.encode() to write json data to zk
 -

 Key: KAFKA-1135
 URL: https://issues.apache.org/jira/browse/KAFKA-1135
 Project: Kafka
  Issue Type: Bug
Reporter: Swapnil Ghike
Assignee: Swapnil Ghike
 Fix For: 0.8.1

 Attachments: KAFKA-1135.patch, KAFKA-1135_2013-11-18_19:17:54.patch, 
 KAFKA-1135_2013-11-18_19:20:58.patch






--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (KAFKA-1135) Code cleanup - use Json.encode() to write json data to zk

2013-11-25 Thread Swapnil Ghike (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13832040#comment-13832040
 ] 

Swapnil Ghike commented on KAFKA-1135:
--

[~jjkoshy], does the above issue look similar to KAFKA-1142? 

 Code cleanup - use Json.encode() to write json data to zk
 -

 Key: KAFKA-1135
 URL: https://issues.apache.org/jira/browse/KAFKA-1135
 Project: Kafka
  Issue Type: Bug
Reporter: Swapnil Ghike
Assignee: Swapnil Ghike
 Fix For: 0.8.1

 Attachments: KAFKA-1135.patch, KAFKA-1135_2013-11-18_19:17:54.patch, 
 KAFKA-1135_2013-11-18_19:20:58.patch






--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (KAFKA-1117) tool for checking the consistency among replicas

2013-11-20 Thread Swapnil Ghike (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13828146#comment-13828146
 ] 

Swapnil Ghike commented on KAFKA-1117:
--

Hey Jun, after committing this patch, builds with Scala 2.10.* are breaking; 
could you please take a look:

[error] 
/home/sghike/kafka-server/kafka-server_trunk/kafka/core/src/main/scala/kafka/tools/ReplicaVerificationTool.scala:364:
 ambiguous reference to overloaded definition,
[error] both constructor FetchResponsePartitionData in class 
FetchResponsePartitionData of type (messages: 
kafka.message.MessageSet)kafka.api.FetchResponsePartitionData
[error] and  constructor FetchResponsePartitionData in class 
FetchResponsePartitionData of type (error: Short, hw: Long, messages: 
kafka.message.MessageSet)kafka.api.FetchResponsePartitionData
[error] match argument types (messages: kafka.message.ByteBufferMessageSet) and 
expected result type kafka.api.FetchResponsePartitionData
[error] replicaBuffer.addFetchedData(topicAndPartition, 
sourceBroker.id, new FetchResponsePartitionData(messages = MessageSet.Empty))
[error] 
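
A self-contained reproduction of the ambiguity, with simplified stand-ins for 
the kafka.api and kafka.message types (the fix shown at the bottom is one 
possibility, not necessarily what was committed):

    class MessageSet
    object MessageSet { val Empty = new MessageSet }

    // The primary constructor has defaults for error and hw, so the call
    // FetchResponsePartitionData(messages = ...) matches it as well as the
    // auxiliary one-argument constructor. scalac 2.9 picked one silently;
    // 2.10 reports "ambiguous reference to overloaded definition".
    class FetchResponsePartitionData(val error: Short = 0.toShort,
                                     val hw: Long = -1L,
                                     val messages: MessageSet) {
      def this(messages: MessageSet) = this(0.toShort, -1L, messages)
    }

    // Ambiguous under 2.10:
    //   new FetchResponsePartitionData(messages = MessageSet.Empty)
    // Unambiguous, e.g. by supplying every argument positionally:
    val pd = new FetchResponsePartitionData(0.toShort, -1L, MessageSet.Empty)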

 tool for checking the consistency among replicas
 

 Key: KAFKA-1117
 URL: https://issues.apache.org/jira/browse/KAFKA-1117
 Project: Kafka
  Issue Type: New Feature
  Components: core
Affects Versions: 0.8.1
Reporter: Jun Rao
Assignee: Jun Rao
 Fix For: 0.8.1

 Attachments: KAFKA-1117.patch, KAFKA-1117_2013-11-11_08:44:25.patch, 
 KAFKA-1117_2013-11-12_08:34:53.patch, KAFKA-1117_2013-11-14_08:24:41.patch, 
 KAFKA-1117_2013-11-18_09:58:23.patch






--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (KAFKA-1135) Code cleanup - use Json.encode() to write json data to zk

2013-11-18 Thread Swapnil Ghike (JIRA)
Swapnil Ghike created KAFKA-1135:


 Summary: Code cleanup - use Json.encode() to write json data to zk
 Key: KAFKA-1135
 URL: https://issues.apache.org/jira/browse/KAFKA-1135
 Project: Kafka
  Issue Type: Bug
Reporter: Swapnil Ghike
Assignee: Swapnil Ghike
 Fix For: 0.8.1






--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (KAFKA-1135) Code cleanup - use Json.encode() to write json data to zk

2013-11-18 Thread Swapnil Ghike (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13826116#comment-13826116
 ] 

Swapnil Ghike commented on KAFKA-1135:
--

Created reviewboard https://reviews.apache.org/r/15665/
 against branch origin/trunk

 Code cleanup - use Json.encode() to write json data to zk
 -

 Key: KAFKA-1135
 URL: https://issues.apache.org/jira/browse/KAFKA-1135
 Project: Kafka
  Issue Type: Bug
Reporter: Swapnil Ghike
Assignee: Swapnil Ghike
 Fix For: 0.8.1

 Attachments: KAFKA-1135.patch






--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (KAFKA-1135) Code cleanup - use Json.encode() to write json data to zk

2013-11-18 Thread Swapnil Ghike (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swapnil Ghike updated KAFKA-1135:
-

Attachment: KAFKA-1135.patch

 Code cleanup - use Json.encode() to write json data to zk
 -

 Key: KAFKA-1135
 URL: https://issues.apache.org/jira/browse/KAFKA-1135
 Project: Kafka
  Issue Type: Bug
Reporter: Swapnil Ghike
Assignee: Swapnil Ghike
 Fix For: 0.8.1

 Attachments: KAFKA-1135.patch






--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (KAFKA-1135) Code cleanup - use Json.encode() to write json data to zk

2013-11-18 Thread Swapnil Ghike (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swapnil Ghike updated KAFKA-1135:
-

Attachment: KAFKA-1135_2013-11-18_19:17:54.patch

 Code cleanup - use Json.encode() to write json data to zk
 -

 Key: KAFKA-1135
 URL: https://issues.apache.org/jira/browse/KAFKA-1135
 Project: Kafka
  Issue Type: Bug
Reporter: Swapnil Ghike
Assignee: Swapnil Ghike
 Fix For: 0.8.1

 Attachments: KAFKA-1135.patch, KAFKA-1135_2013-11-18_19:17:54.patch






--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (KAFKA-1135) Code cleanup - use Json.encode() to write json data to zk

2013-11-18 Thread Swapnil Ghike (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13826128#comment-13826128
 ] 

Swapnil Ghike commented on KAFKA-1135:
--

Updated reviewboard https://reviews.apache.org/r/15665/
 against branch origin/trunk

 Code cleanup - use Json.encode() to write json data to zk
 -

 Key: KAFKA-1135
 URL: https://issues.apache.org/jira/browse/KAFKA-1135
 Project: Kafka
  Issue Type: Bug
Reporter: Swapnil Ghike
Assignee: Swapnil Ghike
 Fix For: 0.8.1

 Attachments: KAFKA-1135.patch, KAFKA-1135_2013-11-18_19:17:54.patch






--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (KAFKA-1135) Code cleanup - use Json.encode() to write json data to zk

2013-11-18 Thread Swapnil Ghike (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swapnil Ghike updated KAFKA-1135:
-

Attachment: KAFKA-1135_2013-11-18_19:20:58.patch

 Code cleanup - use Json.encode() to write json data to zk
 -

 Key: KAFKA-1135
 URL: https://issues.apache.org/jira/browse/KAFKA-1135
 Project: Kafka
  Issue Type: Bug
Reporter: Swapnil Ghike
Assignee: Swapnil Ghike
 Fix For: 0.8.1

 Attachments: KAFKA-1135.patch, KAFKA-1135_2013-11-18_19:17:54.patch, 
 KAFKA-1135_2013-11-18_19:20:58.patch






--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (KAFKA-1135) Code cleanup - use Json.encode() to write json data to zk

2013-11-18 Thread Swapnil Ghike (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13826133#comment-13826133
 ] 

Swapnil Ghike commented on KAFKA-1135:
--

Updated reviewboard https://reviews.apache.org/r/15665/
 against branch origin/trunk

 Code cleanup - use Json.encode() to write json data to zk
 -

 Key: KAFKA-1135
 URL: https://issues.apache.org/jira/browse/KAFKA-1135
 Project: Kafka
  Issue Type: Bug
Reporter: Swapnil Ghike
Assignee: Swapnil Ghike
 Fix For: 0.8.1

 Attachments: KAFKA-1135.patch, KAFKA-1135_2013-11-18_19:17:54.patch, 
 KAFKA-1135_2013-11-18_19:20:58.patch






--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (KAFKA-1121) DumpLogSegments tool should print absolute file name to report inconsistencies

2013-11-05 Thread Swapnil Ghike (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swapnil Ghike updated KAFKA-1121:
-

Fix Version/s: 0.8.1

 DumpLogSegments tool should print absolute file name to report inconsistencies
 --

 Key: KAFKA-1121
 URL: https://issues.apache.org/jira/browse/KAFKA-1121
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8
Reporter: Swapnil Ghike
Assignee: Swapnil Ghike
 Fix For: 0.8.1


 Normally, the user would know where the index file lies. But in case of a 
 script that continuously checks the index files for consistency, it will help 
 to have the absolute file path printed in the output.
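
For instance (a hedged sketch in Scala; the tool's actual message format 
differs), resolving the file before printing is enough:

    import java.io.File

    val indexFile = new File("00000000000000000000.index")
    // Print the absolute path so a monitoring script can report the
    // offending file regardless of the directory the tool was run from.
    println("Inconsistency found in " + indexFile.getAbsolutePath)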



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (KAFKA-1121) DumpLogSegments tool should print absolute file name to report inconsistencies

2013-11-05 Thread Swapnil Ghike (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swapnil Ghike updated KAFKA-1121:
-

Attachment: KAFKA-1121.patch

 DumpLogSegments tool should print absolute file name to report inconsistencies
 --

 Key: KAFKA-1121
 URL: https://issues.apache.org/jira/browse/KAFKA-1121
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8
Reporter: Swapnil Ghike
Assignee: Swapnil Ghike
 Fix For: 0.8.1

 Attachments: KAFKA-1121.patch


 Normally, the user would know where the index file lies. But in case of a 
 script that continuously checks the index files for consistency, it will help 
 to have the absolute file path printed in the output.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (KAFKA-1100) metrics shouldn't have generation/timestamp specific names

2013-10-23 Thread Swapnil Ghike (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13802635#comment-13802635
 ] 

Swapnil Ghike commented on KAFKA-1100:
--

Hi Jason, at LinkedIn, we use wildcards/regexes to create graphs from such 
mbeans. Would you be able to do something similar?
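
For example (illustrative only; the exact pattern we use internally differs), 
a regex can collapse the generational portion of the mbean name before the 
metric is graphed:

    // The generational id embedded in the name:
    // <group>-<timestamp>-<uuid fragment>-<sequence>[-<hash>]
    val generation = """square-\d+-[0-9a-f]+-\d+(-\d+)?""".r
    val name = "kafka.consumer.FetchRequestAndResponseMetricssquare-" +
      "1371718712833-e9bb4d10-0-508818741-AllBrokersFetchRequestRateAndTimeMs"
    // Rewrite to a stable name before emitting to graphite/opentsdb, so
    // graphs survive restarts even though the raw mbean name changes.
    val stable = generation.replaceAllIn(name, "ALL")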

 metrics shouldn't have generation/timestamp specific names
 --

 Key: KAFKA-1100
 URL: https://issues.apache.org/jira/browse/KAFKA-1100
 Project: Kafka
  Issue Type: Bug
Reporter: Jason Rosenberg

 I've noticed that there are several metrics that seem useful for monitoring 
 over time, but which contain generational timestamps in the metric name.
 We are using yammer metrics libraries to send metrics data in a background 
 thread every 10 seconds (to kafka actually), and then they eventually end up 
 in a metrics database (graphite, opentsdb).  The metrics then get graphed via 
 UI, and we can see metrics going way back, etc.
 Unfortunately, many of the metrics coming from kafka seem to have metric 
 names that change any time the server or consumer is restarted, which makes 
 it hard to easily create graphs over long periods of time (spanning app 
 restarts).
 For example:
 names like: 
 kafka.consumer.FetchRequestAndResponseMetricssquare-1371718712833-e9bb4d10-0-508818741-AllBrokersFetchRequestRateAndTimeMs
 or: 
 kafka.consumer.ZookeeperConsumerConnector...topicName.square-1373476779391-78aa2e83-0-FetchQueueSize
 In our staging environment, we have our servers on regular auto-deploy cycles 
 (they restart every few hours).  So it's just not longitudinally usable to have 
 metric names constantly changing like this.
 Is there something that can easily be done?  Is it really necessary to have 
 so much cryptic info in the metric name?



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (KAFKA-1100) metrics shouldn't have generation/timestamp specific names

2013-10-23 Thread Swapnil Ghike (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13803660#comment-13803660
 ] 

Swapnil Ghike commented on KAFKA-1100:
--

That makes sense Joel, we could also use the clientId to differentiate between 
two consumerConnectors that start up on the same host with the same group.

 metrics shouldn't have generation/timestamp specific names
 --

 Key: KAFKA-1100
 URL: https://issues.apache.org/jira/browse/KAFKA-1100
 Project: Kafka
  Issue Type: Bug
Reporter: Jason Rosenberg

 I've noticed that there are several metrics that seem useful for monitoring 
 over time, but which contain generational timestamps in the metric name.
 We are using yammer metrics libraries to send metrics data in a background 
 thread every 10 seconds (to kafka actually), and then they eventually end up 
 in a metrics database (graphite, opentsdb).  The metrics then get graphed via 
 UI, and we can see metrics going way back, etc.
 Unfortunately, many of the metrics coming from kafka seem to have metric 
 names that change any time the server or consumer is restarted, which makes 
 it hard to easily create graphs over long periods of time (spanning app 
 restarts).
 For example:
 names like: 
 kafka.consumer.FetchRequestAndResponseMetricssquare-1371718712833-e9bb4d10-0-508818741-AllBrokersFetchRequestRateAndTimeMs
 or: 
 kafka.consumer.ZookeeperConsumerConnector...topicName.square-1373476779391-78aa2e83-0-FetchQueueSize
 In our staging environment, we have our servers on regular auto-deploy cycles 
 (they restart every few hours).  So it's just not longitudinally usable to have 
 metric names constantly changing like this.
 Is there something that can easily be done?  Is it really necessary to have 
 so much cryptic info in the metric name?



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (KAFKA-1093) Log.getOffsetsBefore(t, …) does not return the last confirmed offset before t

2013-10-19 Thread Swapnil Ghike (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swapnil Ghike updated KAFKA-1093:
-

Affects Version/s: 0.8

 Log.getOffsetsBefore(t, …) does not return the last confirmed offset before t
 -

 Key: KAFKA-1093
 URL: https://issues.apache.org/jira/browse/KAFKA-1093
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8
Reporter: Swapnil Ghike
Assignee: Swapnil Ghike

 Let's say there are three log segments s1, s2, s3.
 In Log.getOffsetsBefore(t, …), the offsetTimeArray will look like - 
 [(s1.start, s1.lastModified), (s2.start, s2.lastModified), (s3.start, 
 s3.lastModified), (logEndOffset, currentTimeMs)].
 Let's say s2.lastModified < t < s3.lastModified. getOffsetsBefore(t, 1) will 
 return Seq(s2.start).
 However, we already know s3.firstAppendTime. So, if s3.firstAppendTime < t < 
 s3.lastModified, we should rather return s3.start. 
 This also resolves another bug wherein the log has only one segment and 
 getOffsetsBefore() returns an empty Seq if the timestamp provided is less 
 than the lastModified of the only segment. We should rather return the 
 startOffset of the segment if the timestamp is greater than the 
 firstAppendTime of the segment.
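
A hedged sketch of the proposed rule (types simplified; the real code is 
Log.getOffsetsBefore):

    case class Segment(start: Long, firstAppendTime: Long, lastModified: Long)

    // Return the start offset of the last segment that had already received
    // an append before t. With firstAppendTime available we can answer from
    // s3 when s3.firstAppendTime < t < s3.lastModified, instead of falling
    // back to s2 (or to an empty result in the single-segment case).
    def offsetBefore(segments: Seq[Segment], t: Long): Option[Long] =
      segments.reverse.find(_.firstAppendTime < t).map(_.start)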



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (KAFKA-1093) Log.getOffsetsBefore(t, …) does not return the last confirmed offset before t

2013-10-19 Thread Swapnil Ghike (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swapnil Ghike updated KAFKA-1093:
-

Description: 
Let's say there are three log segments s1, s2, s3.

In Log.getOffsetsBefore(t, …), the offsetTimeArray will look like - [(s1.start, 
s1.lastModified), (s2.start, s2.lastModified), (s3.start, s3.lastModified), 
(logEndOffset, currentTimeMs)].

Let's say s2.lastModified < t < s3.lastModified. getOffsetsBefore(t, 1) will 
return Seq(s2.start).

However, we already know s3.firstAppendTime (s3.created in trunk). So, if 
s3.firstAppendTime < t < s3.lastModified, we should rather return s3.start. 

This also resolves another bug wherein the log has only one segment and 
getOffsetsBefore() returns an empty Seq if the timestamp provided is less than 
the lastModified of the only segment. We should rather return the startOffset 
of the segment if the timestamp is greater than the firstAppendTime of the 
segment.

  was:
Let's say there are three log segments s1, s2, s3.

In Log.getOffsetsBefore(t, …), the offsetTimeArray will look like - [(s1.start, 
s1.lastModified), (s2.start, s2.lastModified), (s3.start, s3.lastModified), 
(logEndOffset, currentTimeMs)].

Let's say s2.lastModified < t < s3.lastModified. getOffsetsBefore(t, 1) will 
return Seq(s2.start).

However, we already know s3.firstAppendTime. So, if s3.firstAppendTime < t < 
s3.lastModified, we should rather return s3.start. 

This also resolves another bug wherein the log has only one segment and 
getOffsetsBefore() returns an empty Seq if the timestamp provided is less than 
the lastModified of the only segment. We should rather return the startOffset 
of the segment if the timestamp is greater than the firstAppendTime of the 
segment.


 Log.getOffsetsBefore(t, …) does not return the last confirmed offset before t
 -

 Key: KAFKA-1093
 URL: https://issues.apache.org/jira/browse/KAFKA-1093
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8
Reporter: Swapnil Ghike
Assignee: Swapnil Ghike

 Let's say there are three log segments s1, s2, s3.
 In Log.getOffsetsBefore(t, …), the offsetTimeArray will look like - 
 [(s1.start, s1.lastModified), (s2.start, s2.lastModified), (s3.start, 
 s3.lastModified), (logEndOffset, currentTimeMs)].
 Let's say s2.lastModified < t < s3.lastModified. getOffsetsBefore(t, 1) will 
 return Seq(s2.start).
 However, we already know s3.firstAppendTime (s3.created in trunk). So, if 
 s3.firstAppendTime < t < s3.lastModified, we should rather return s3.start. 
 This also resolves another bug wherein the log has only one segment and 
 getOffsetsBefore() returns an empty Seq if the timestamp provided is less 
 than the lastModified of the only segment. We should rather return the 
 startOffset of the segment if the timestamp is greater than the 
 firstAppendTime of the segment.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (KAFKA-1093) Log.getOffsetsBefore(t, …) does not return the last confirmed offset before t

2013-10-19 Thread Swapnil Ghike (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13800013#comment-13800013
 ] 

Swapnil Ghike commented on KAFKA-1093:
--

Created reviewboard 

 Log.getOffsetsBefore(t, …) does not return the last confirmed offset before t
 -

 Key: KAFKA-1093
 URL: https://issues.apache.org/jira/browse/KAFKA-1093
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8
Reporter: Swapnil Ghike
Assignee: Swapnil Ghike
 Attachments: KAFKA-1093.patch


 Let's say there are three log segments s1, s2, s3.
 In Log.getOffsetsBefore(t, …), the offsetTimeArray will look like - 
 [(s1.start, s1.lastModified), (s2.start, s2.lastModified), (s3.start, 
 s3.lastModified), (logEndOffset, currentTimeMs)].
 Let's say s2.lastModified < t < s3.lastModified. getOffsetsBefore(t, 1) will 
 return Seq(s2.start).
 However, we already know s3.firstAppendTime (s3.created in trunk). So, if 
 s3.firstAppendTime < t < s3.lastModified, we should rather return s3.start. 
 This also resolves another bug wherein the log has only one segment and 
 getOffsetsBefore() returns an empty Seq if the timestamp provided is less 
 than the lastModified of the only segment. We should rather return the 
 startOffset of the segment if the timestamp is greater than the 
 firstAppendTime of the segment.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (KAFKA-1093) Log.getOffsetsBefore(t, …) does not return the last confirmed offset before t

2013-10-19 Thread Swapnil Ghike (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swapnil Ghike updated KAFKA-1093:
-

Attachment: KAFKA-1093.patch

 Log.getOffsetsBefore(t, …) does not return the last confirmed offset before t
 -

 Key: KAFKA-1093
 URL: https://issues.apache.org/jira/browse/KAFKA-1093
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8
Reporter: Swapnil Ghike
Assignee: Swapnil Ghike
 Attachments: KAFKA-1093.patch


 Let's say there are three log segments s1, s2, s3.
 In Log.getOffsetsBefore(t, …), the offsetTimeArray will look like - 
 [(s1.start, s1.lastModified), (s2.start, s2.lastModified), (s3.start, 
 s3.lastModified), (logEndOffset, currentTimeMs)].
 Let's say s2.lastModified < t < s3.lastModified. getOffsetsBefore(t, 1) will 
 return Seq(s2.start).
 However, we already know s3.firstAppendTime (s3.created in trunk). So, if 
 s3.firstAppendTime < t < s3.lastModified, we should rather return s3.start. 
 This also resolves another bug wherein the log has only one segment and 
 getOffsetsBefore() returns an empty Seq if the timestamp provided is less 
 than the lastModified of the only segment. We should rather return the 
 startOffset of the segment if the timestamp is greater than the 
 firstAppendTime of the segment.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (KAFKA-1093) Log.getOffsetsBefore(t, …) does not return the last confirmed offset before t

2013-10-19 Thread Swapnil Ghike (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13800018#comment-13800018
 ] 

Swapnil Ghike commented on KAFKA-1093:
--

Created reviewboard https://reviews.apache.org/r/14771/


 Log.getOffsetsBefore(t, …) does not return the last confirmed offset before t
 -

 Key: KAFKA-1093
 URL: https://issues.apache.org/jira/browse/KAFKA-1093
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8
Reporter: Swapnil Ghike
Assignee: Swapnil Ghike
 Attachments: KAFKA-1093.patch


 Let's say there are three log segments s1, s2, s3.
 In Log.getOffsetsBefore(t, …), the offsetTimeArray will look like - 
 [(s1.start, s1.lastModified), (s2.start, s2.lastModified), (s3.start, 
 s3.lastModified), (logEndOffset, currentTimeMs)].
 Let's say s2.lastModified < t < s3.lastModified. getOffsetsBefore(t, 1) will 
 return Seq(s2.start).
 However, we already know s3.firstAppendTime (s3.created in trunk). So, if 
 s3.firstAppendTime < t < s3.lastModified, we should rather return s3.start. 
 This also resolves another bug wherein the log has only one segment and 
 getOffsetsBefore() returns an empty Seq if the timestamp provided is less 
 than the lastModified of the only segment. We should rather return the 
 startOffset of the segment if the timestamp is greater than the 
 firstAppendTime of the segment.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (KAFKA-1093) Log.getOffsetsBefore(t, …) does not return the last confirmed offset before t

2013-10-19 Thread Swapnil Ghike (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13800021#comment-13800021
 ] 

Swapnil Ghike commented on KAFKA-1093:
--

Created reviewboard https://reviews.apache.org/r/14772/


 Log.getOffsetsBefore(t, …) does not return the last confirmed offset before t
 -

 Key: KAFKA-1093
 URL: https://issues.apache.org/jira/browse/KAFKA-1093
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8
Reporter: Swapnil Ghike
Assignee: Swapnil Ghike
 Attachments: KAFKA-1093.patch


 Let's say there are three log segments s1, s2, s3.
 In Log.getOffsetsBefore(t, …), the offsetTimeArray will look like - 
 [(s1.start, s1.lastModified), (s2.start, s2.lastModified), (s3.start, 
 s3.lastModified), (logEndOffset, currentTimeMs)].
 Let's say s2.lastModified < t < s3.lastModified. getOffsetsBefore(t, 1) will 
 return Seq(s2.start).
 However, we already know s3.firstAppendTime (s3.created in trunk). So, if 
 s3.firstAppendTime < t < s3.lastModified, we should rather return s3.start. 
 This also resolves another bug wherein the log has only one segment and 
 getOffsetsBefore() returns an empty Seq if the timestamp provided is less 
 than the lastModified of the only segment. We should rather return the 
 startOffset of the segment if the timestamp is greater than the 
 firstAppendTime of the segment.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (KAFKA-1093) Log.getOffsetsBefore(t, …) does not return the last confirmed offset before t

2013-10-19 Thread Swapnil Ghike (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swapnil Ghike updated KAFKA-1093:
-

Attachment: KAFKA-1093.patch

 Log.getOffsetsBefore(t, …) does not return the last confirmed offset before t
 -

 Key: KAFKA-1093
 URL: https://issues.apache.org/jira/browse/KAFKA-1093
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8
Reporter: Swapnil Ghike
Assignee: Swapnil Ghike
 Attachments: KAFKA-1093.patch


 Let's say there are three log segments s1, s2, s3.
 In Log.getOffsetsBefore(t, …), the offsetTimeArray will look like - 
 [(s1.start, s1.lastModified), (s2.start, s2.lastModified), (s3.start, 
 s3.lastModified), (logEndOffset, currentTimeMs)].
 Let's say s2.lastModified < t < s3.lastModified. getOffsetsBefore(t, 1) will 
 return Seq(s2.start).
 However, we already know s3.firstAppendTime (s3.created in trunk). So, if 
 s3.firstAppendTime < t < s3.lastModified, we should rather return s3.start. 
 This also resolves another bug wherein the log has only one segment and 
 getOffsetsBefore() returns an empty Seq if the timestamp provided is less 
 than the lastModified of the only segment. We should rather return the 
 startOffset of the segment if the timestamp is greater than the 
 firstAppendTime of the segment.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Issue Comment Deleted] (KAFKA-1093) Log.getOffsetsBefore(t, …) does not return the last confirmed offset before t

2013-10-19 Thread Swapnil Ghike (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swapnil Ghike updated KAFKA-1093:
-

Comment: was deleted

(was: Created reviewboard https://reviews.apache.org/r/14772/
)

 Log.getOffsetsBefore(t, …) does not return the last confirmed offset before t
 -

 Key: KAFKA-1093
 URL: https://issues.apache.org/jira/browse/KAFKA-1093
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8
Reporter: Swapnil Ghike
Assignee: Swapnil Ghike
 Attachments: KAFKA-1093.patch


 Let's say there are three log segments s1, s2, s3.
 In Log.getOffsetsBefore(t, …), the offsetTimeArray will look like - 
 [(s1.start, s1.lastModified), (s2.start, s2.lastModified), (s3.start, 
 s3.lastModified), (logEndOffset, currentTimeMs)].
 Let's say s2.lastModified < t < s3.lastModified. getOffsetsBefore(t, 1) will 
 return Seq(s2.start).
 However, we already know s3.firstAppendTime (s3.created in trunk). So, if 
 s3.firstAppendTime < t < s3.lastModified, we should rather return s3.start. 
 This also resolves another bug wherein the log has only one segment and 
 getOffsetsBefore() returns an empty Seq if the timestamp provided is less 
 than the lastModified of the only segment. We should rather return the 
 startOffset of the segment if the timestamp is greater than the 
 firstAppendTime of the segment.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (KAFKA-1093) Log.getOffsetsBefore(t, …) does not return the last confirmed offset before t

2013-10-19 Thread Swapnil Ghike (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swapnil Ghike updated KAFKA-1093:
-

Attachment: (was: KAFKA-1093.patch)

 Log.getOffsetsBefore(t, …) does not return the last confirmed offset before t
 -

 Key: KAFKA-1093
 URL: https://issues.apache.org/jira/browse/KAFKA-1093
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8
Reporter: Swapnil Ghike
Assignee: Swapnil Ghike
 Attachments: KAFKA-1093.patch


 Let's say there are three log segments s1, s2, s3.
 In Log.getOffsetsBefore(t, …), the offsetTimeArray will look like - 
 [(s1.start, s1.lastModified), (s2.start, s2.lastModified), (s3.start, 
 s3.lastModified), (logEndOffset, currentTimeMs)].
 Let's say s2.lastModified < t < s3.lastModified. getOffsetsBefore(t, 1) will 
 return Seq(s2.start).
 However, we already know s3.firstAppendTime (s3.created in trunk). So, if 
 s3.firstAppendTime < t < s3.lastModified, we should rather return s3.start. 
 This also resolves another bug wherein the log has only one segment and 
 getOffsetsBefore() returns an empty Seq if the timestamp provided is less 
 than the lastModified of the only segment. We should rather return the 
 startOffset of the segment if the timestamp is greater than the 
 firstAppendTime of the segment.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (KAFKA-1094) Configure reviewboard url in kafka-patch-review tool

2013-10-19 Thread Swapnil Ghike (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13800023#comment-13800023
 ] 

Swapnil Ghike commented on KAFKA-1094:
--

Created reviewboard https://reviews.apache.org/r/14773/


 Configure reviewboard url in kafka-patch-review tool
 

 Key: KAFKA-1094
 URL: https://issues.apache.org/jira/browse/KAFKA-1094
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8
Reporter: Swapnil Ghike
Assignee: Swapnil Ghike
 Attachments: KAFKA-1094.patch


 If someone forgets to configure review board, then the tool uploads a patch 
 without creating an RB.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (KAFKA-1094) Configure reviewboard url in kafka-patch-review tool

2013-10-19 Thread Swapnil Ghike (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swapnil Ghike updated KAFKA-1094:
-

Attachment: KAFKA-1094.patch

 Configure reviewboard url in kafka-patch-review tool
 

 Key: KAFKA-1094
 URL: https://issues.apache.org/jira/browse/KAFKA-1094
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8
Reporter: Swapnil Ghike
Assignee: Swapnil Ghike
 Attachments: KAFKA-1094.patch


 If someone forgets to configure review board, then the tool uploads a patch 
 without creating an RB.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (KAFKA-1094) Configure reviewboard url in kafka-patch-review tool

2013-10-19 Thread Swapnil Ghike (JIRA)
Swapnil Ghike created KAFKA-1094:


 Summary: Configure reviewboard url in kafka-patch-review tool
 Key: KAFKA-1094
 URL: https://issues.apache.org/jira/browse/KAFKA-1094
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8
Reporter: Swapnil Ghike
Assignee: Swapnil Ghike
 Attachments: KAFKA-1094.patch

If someone forgets to configure review board, then the tool uploads a patch 
without creating an RB.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (KAFKA-1087) Empty topic list causes consumer to fetch metadata of all topics

2013-10-15 Thread Swapnil Ghike (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13794966#comment-13794966
 ] 

Swapnil Ghike commented on KAFKA-1087:
--

Same patch should apply fine to trunk.

 Empty topic list causes consumer to fetch metadata of all topics
 

 Key: KAFKA-1087
 URL: https://issues.apache.org/jira/browse/KAFKA-1087
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8
Reporter: Swapnil Ghike
Assignee: Swapnil Ghike
 Attachments: KAFKA-1087.patch


 The ClientUtils fetches metadata for all topics if the topic set is empty. 
 If the topic list of a consumer is empty, the following happens if a 
 rebalance is triggered:
 - The fetcher is restarted, fetcher.startConnections() starts a 
 LeaderFinderThread
 - LeaderFinderThread waits on a condition
 - fetcher.startConnections() signals the aforementioned condition
 - LeaderFinderThread obtains metadata for all topics since the topic list is 
 empty.
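
A minimal sketch of the guard (the wrapper below is illustrative; the real 
call site is in ClientUtils and the LeaderFinderThread):

    def fetchTopicMetadata(topics: Set[String]): Set[String] = {
      // An empty topic set currently means "all topics"; short-circuit
      // instead, so a consumer with no subscriptions never triggers a
      // cluster-wide metadata fetch during rebalance.
      if (topics.isEmpty) Set.empty
      else topics.map(t => "metadata-for-" + t) // stand-in for the request
    }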



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (KAFKA-1087) Empty topic list causes consumer to fetch metadata of all topics

2013-10-14 Thread Swapnil Ghike (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swapnil Ghike updated KAFKA-1087:
-

Affects Version/s: 0.8

 Empty topic list causes consumer to fetch metadata of all topics
 

 Key: KAFKA-1087
 URL: https://issues.apache.org/jira/browse/KAFKA-1087
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8
Reporter: Swapnil Ghike
Assignee: Swapnil Ghike

 The ClientUtils fetches metadata for all topics if the topic set is empty. 
 If the topic list of a consumer is empty, the following happens if a 
 rebalance is triggered:
 - LeaderFinderThread waits on a condition
 - The fetcher is restarted and it signals the aforementioned condition
 - LeaderFinderThread obtains metadata for all topics since the topic list is 
 empty.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (KAFKA-1087) Empty topic list causes consumer to fetch metadata of all topics

2013-10-14 Thread Swapnil Ghike (JIRA)
Swapnil Ghike created KAFKA-1087:


 Summary: Empty topic list causes consumer to fetch metadata of all 
topics
 Key: KAFKA-1087
 URL: https://issues.apache.org/jira/browse/KAFKA-1087
 Project: Kafka
  Issue Type: Bug
Reporter: Swapnil Ghike
Assignee: Swapnil Ghike


The ClientUtils fetches metadata for all topics if the topic set is empty. 

If the topic list of a consumer is empty, the following happens if a rebalance 
is triggered:
- LeaderFinderThread waits on a condition
- The fetcher is restarted and it signals the aforementioned condition
- LeaderFinderThread obtains metadata for all topics since the topic list is 
empty.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (KAFKA-1087) Empty topic list causes consumer to fetch metadata of all topics

2013-10-14 Thread Swapnil Ghike (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swapnil Ghike updated KAFKA-1087:
-

Attachment: KAFKA-1087.patch

Unit tests pass.

 Empty topic list causes consumer to fetch metadata of all topics
 

 Key: KAFKA-1087
 URL: https://issues.apache.org/jira/browse/KAFKA-1087
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8
Reporter: Swapnil Ghike
Assignee: Swapnil Ghike
 Attachments: KAFKA-1087.patch


 The ClientUtils fetches metadata for all topics if the topic set is empty. 
 If the topic list of a consumer is empty, the following happens if a 
 rebalance is triggered:
 - LeaderFinderThread waits on a condition
 - The fetcher is restarted and it signals the aforementioned condition
 - LeaderFinderThread obtains metadata for all topics since the topic list is 
 empty.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (KAFKA-1087) Empty topic list causes consumer to fetch metadata of all topics

2013-10-14 Thread Swapnil Ghike (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swapnil Ghike updated KAFKA-1087:
-

Description: 
The ClientUtils fetches metadata for all topics if the topic set is empty. 

If the topic list of a consumer is empty, the following happens if a rebalance 
is triggered:
- The fetcher is restarted, fetcher.startConnections() starts a 
LeaderFinderThread
- LeaderFinderThread waits on a condition
- fetcher.startConnections() signals the aforementioned condition
- LeaderFinderThread obtains metadata for all topics since the topic list is 
empty.

  was:
The ClientUtils fetches metadata for all topics if the topic set is empty. 

If the topic list of a consumer is empty, the following happens if a rebalance 
is triggered:
- The fetcher is restarted, it starts a LeaderFinderThread
- LeaderFinderThread waits on a condition
- fetcher.startConnections() signals the aforementioned condition
- LeaderFinderThread obtains metadata for all topics since the topic list is 
empty.


 Empty topic list causes consumer to fetch metadata of all topics
 

 Key: KAFKA-1087
 URL: https://issues.apache.org/jira/browse/KAFKA-1087
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8
Reporter: Swapnil Ghike
Assignee: Swapnil Ghike
 Attachments: KAFKA-1087.patch


 The ClientUtils fetches metadata for all topics if the topic set is empty. 
 If the topic list of a consumer is empty, the following happens if a 
 rebalance is triggered:
 - The fetcher is restarted, fetcher.startConnections() starts a 
 LeaderFinderThread
 - LeaderFinderThread waits on a condition
 - fetcher.startConnections() signals the aforementioned condition
 - LeaderFinderThread obtains metadata for all topics since the topic list is 
 empty.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (KAFKA-1087) Empty topic list causes consumer to fetch metadata of all topics

2013-10-14 Thread Swapnil Ghike (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swapnil Ghike updated KAFKA-1087:
-

Description: 
The ClientUtils fetches metadata for all topics if the topic set is empty. 

If the topic list of a consumer is empty, the following happens if a rebalance 
is triggered:
- The fetcher is restarted, it starts a LeaderFinderThread
- LeaderFinderThread waits on a condition
- fetcher.startConnections() signals the aforementioned condition
- LeaderFinderThread obtains metadata for all topics since the topic list is 
empty.

  was:
The ClientUtils fetches metadata for all topics if the topic set is empty. 

If the topic list of a consumer is empty, the following happens if a rebalance 
is triggered:
- LeaderFinderThread waits on a condition
- The fetcher is restarted and it signals the aforementioned condition
- LeaderFinderThread obtains metadata for all topics since the topic list is 
empty.


 Empty topic list causes consumer to fetch metadata of all topics
 

 Key: KAFKA-1087
 URL: https://issues.apache.org/jira/browse/KAFKA-1087
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8
Reporter: Swapnil Ghike
Assignee: Swapnil Ghike
 Attachments: KAFKA-1087.patch


 The ClientUtils fetches metadata for all topics if the topic set is empty. 
 If the topic list of a consumer is empty, the following happens if a 
 rebalance is triggered:
 - The fetcher is restarted, it starts a LeaderFinderThread
 - LeaderFinderThread waits on a condition
 - fetcher.startConnections() signals the aforementioned condition
 - LeaderFinderThread obtains metadata for all topics since the topic list is 
 empty.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (KAFKA-1030) Addition of partitions requires bouncing all the consumers of that topic

2013-09-17 Thread Swapnil Ghike (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13769884#comment-13769884
 ] 

Swapnil Ghike commented on KAFKA-1030:
--

+1 to that, Guozhang. Thanks for running the tests.

 Addition of partitions requires bouncing all the consumers of that topic
 

 Key: KAFKA-1030
 URL: https://issues.apache.org/jira/browse/KAFKA-1030
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8
Reporter: Swapnil Ghike
Assignee: Guozhang Wang
Priority: Blocker
 Fix For: 0.8

 Attachments: KAFKA-1030-v1.patch


 Consumer may not notice new partitions because the propagation of the 
 metadata to servers can be delayed. 
 Options:
 1. As Jun suggested on KAFKA-956, the easiest fix would be to read the new 
 partition data from zookeeper instead of a kafka server.
 2. Run a fetch metadata loop in consumer, and set auto.offset.reset to 
 smallest once the consumer has started.
 1 sounds easier to do. If 1 causes long delays in reading all partitions at 
 the start of every rebalance, 2 may be worth considering.
  
 The same issue affects MirrorMaker when new topics are created, MirrorMaker 
 may not notice all partitions of the new topics until the next rebalance.
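
As a rough sketch of option 1 above, a topic's partitions can be read directly 
from ZooKeeper (the source of truth), sidestepping stale broker metadata. This 
assumes the 0.8 layout /brokers/topics/<topic>/partitions/<id> and the zkclient 
library Kafka already depends on; the connect string is hypothetical and error 
handling is omitted:

import org.I0Itec.zkclient.ZkClient
import scala.collection.JavaConverters._

object PartitionsFromZkSketch {
  def partitionIds(zk: ZkClient, topic: String): Seq[Int] = {
    val path = "/brokers/topics/" + topic + "/partitions"
    // each child znode is a partition id; new partitions appear here as
    // soon as they are created, with no metadata propagation delay
    zk.getChildren(path).asScala.map(_.toInt).sorted
  }

  def main(args: Array[String]): Unit = {
    val zk = new ZkClient("localhost:2181") // hypothetical connect string
    println(partitionIds(zk, "my-topic"))
    zk.close()
  }
}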

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (KAFKA-1003) ConsumerFetcherManager should pass clientId as metricsPrefix to AbstractFetcherManager

2013-09-16 Thread Swapnil Ghike (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swapnil Ghike updated KAFKA-1003:
-

Attachment: KAFKA-1003.patch

 ConsumerFetcherManager should pass clientId as metricsPrefix to 
 AbstractFetcherManager
 --

 Key: KAFKA-1003
 URL: https://issues.apache.org/jira/browse/KAFKA-1003
 Project: Kafka
  Issue Type: Bug
Reporter: Swapnil Ghike
Assignee: Swapnil Ghike
 Fix For: 0.8

 Attachments: kafka-1003.patch


 For consistency. We use clientId in the metric names elsewhere on clients.
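
Purely to illustrate the convention (this is not the actual 
AbstractFetcherManager code, and the metric name below is made up):

object MetricsPrefixSketch {
  // prefixing with clientId yields e.g. "myConsumer-MaxLag" instead of a
  // generic, shared prefix, so per-client metrics stay distinguishable
  def metricName(clientId: String, metric: String): String =
    clientId + "-" + metric

  def main(args: Array[String]): Unit =
    println(metricName("myConsumer", "MaxLag"))
}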

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (KAFKA-1053) Kafka patch review tool

2013-09-16 Thread Swapnil Ghike (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13768499#comment-13768499
 ] 

Swapnil Ghike commented on KAFKA-1053:
--

After installing argparse and using origin/0.8 instead of 0.8 as the branch 
name, it worked like a charm!

 Kafka patch review tool
 ---

 Key: KAFKA-1053
 URL: https://issues.apache.org/jira/browse/KAFKA-1053
 Project: Kafka
  Issue Type: New Feature
  Components: tools
Reporter: Neha Narkhede
Assignee: Neha Narkhede
 Attachments: KAFKA-1053-2013-09-15_09:40:04.patch, 
 KAFKA-1053_2013-09-15_20:28:01.patch, KAFKA-1053-followup2.patch, 
 KAFKA-1053-followup.patch, KAFKA-1053-v1.patch, KAFKA-1053-v1.patch, 
 KAFKA-1053-v1.patch, KAFKA-1053-v2.patch, KAFKA-1053-v3.patch


 Created a new patch review tool that will integrate JIRA and reviewboard - 
 https://cwiki.apache.org/confluence/display/KAFKA/Kafka+patch+review+tool

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (KAFKA-1003) ConsumerFetcherManager should pass clientId as metricsPrefix to AbstractFetcherManager

2013-09-16 Thread Swapnil Ghike (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swapnil Ghike updated KAFKA-1003:
-

Attachment: (was: KAFKA-1003.patch)

 ConsumerFetcherManager should pass clientId as metricsPrefix to 
 AbstractFetcherManager
 --

 Key: KAFKA-1003
 URL: https://issues.apache.org/jira/browse/KAFKA-1003
 Project: Kafka
  Issue Type: Bug
Reporter: Swapnil Ghike
Assignee: Swapnil Ghike
 Fix For: 0.8

 Attachments: kafka-1003.patch


 For consistency. We use clientId in the metric names elsewhere on clients.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (KAFKA-1003) ConsumerFetcherManager should pass clientId as metricsPrefix to AbstractFetcherManager

2013-09-16 Thread Swapnil Ghike (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swapnil Ghike updated KAFKA-1003:
-

Attachment: (was: KAFKA-1003.patch)

 ConsumerFetcherManager should pass clientId as metricsPrefix to 
 AbstractFetcherManager
 --

 Key: KAFKA-1003
 URL: https://issues.apache.org/jira/browse/KAFKA-1003
 Project: Kafka
  Issue Type: Bug
Reporter: Swapnil Ghike
Assignee: Swapnil Ghike
 Fix For: 0.8

 Attachments: kafka-1003.patch


 For consistency. We use clientId in the metric names elsewhere on clients.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (KAFKA-1003) ConsumerFetcherManager should pass clientId as metricsPrefix to AbstractFetcherManager

2013-09-16 Thread Swapnil Ghike (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13768496#comment-13768496
 ] 

Swapnil Ghike commented on KAFKA-1003:
--

Created reviewboard https://reviews.apache.org/r/14149/


 ConsumerFetcherManager should pass clientId as metricsPrefix to 
 AbstractFetcherManager
 --

 Key: KAFKA-1003
 URL: https://issues.apache.org/jira/browse/KAFKA-1003
 Project: Kafka
  Issue Type: Bug
Reporter: Swapnil Ghike
Assignee: Swapnil Ghike
 Fix For: 0.8

 Attachments: kafka-1003.patch


 For consistency. We use clientId in the metric names elsewhere on clients.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (KAFKA-1003) ConsumerFetcherManager should pass clientId as metricsPrefix to AbstractFetcherManager

2013-09-16 Thread Swapnil Ghike (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swapnil Ghike updated KAFKA-1003:
-

Attachment: (was: KAFKA-1003.patch)

 ConsumerFetcherManager should pass clientId as metricsPrefix to 
 AbstractFetcherManager
 --

 Key: KAFKA-1003
 URL: https://issues.apache.org/jira/browse/KAFKA-1003
 Project: Kafka
  Issue Type: Bug
Reporter: Swapnil Ghike
Assignee: Swapnil Ghike
 Fix For: 0.8

 Attachments: kafka-1003.patch


 For consistency. We use clientId in the metric names elsewhere on clients.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (KAFKA-1003) ConsumerFetcherManager should pass clientId as metricsPrefix to AbstractFetcherManager

2013-09-16 Thread Swapnil Ghike (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swapnil Ghike updated KAFKA-1003:
-

Attachment: (was: KAFKA-1003.patch)

 ConsumerFetcherManager should pass clientId as metricsPrefix to 
 AbstractFetcherManager
 --

 Key: KAFKA-1003
 URL: https://issues.apache.org/jira/browse/KAFKA-1003
 Project: Kafka
  Issue Type: Bug
Reporter: Swapnil Ghike
Assignee: Swapnil Ghike
 Fix For: 0.8

 Attachments: kafka-1003.patch


 For consistency. We use clientId in the metric names elsewhere on clients.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (KAFKA-1003) ConsumerFetcherManager should pass clientId as metricsPrefix to AbstractFetcherManager

2013-09-16 Thread Swapnil Ghike (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swapnil Ghike updated KAFKA-1003:
-

Attachment: (was: KAFKA-1003.patch)

 ConsumerFetcherManager should pass clientId as metricsPrefix to 
 AbstractFetcherManager
 --

 Key: KAFKA-1003
 URL: https://issues.apache.org/jira/browse/KAFKA-1003
 Project: Kafka
  Issue Type: Bug
Reporter: Swapnil Ghike
Assignee: Swapnil Ghike
 Fix For: 0.8

 Attachments: kafka-1003.patch


 For consistency. We use clientId in the metric names elsewhere on clients.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (KAFKA-1003) ConsumerFetcherManager should pass clientId as metricsPrefix to AbstractFetcherManager

2013-09-16 Thread Swapnil Ghike (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swapnil Ghike updated KAFKA-1003:
-

Attachment: (was: KAFKA-1003.patch)

 ConsumerFetcherManager should pass clientId as metricsPrefix to 
 AbstractFetcherManager
 --

 Key: KAFKA-1003
 URL: https://issues.apache.org/jira/browse/KAFKA-1003
 Project: Kafka
  Issue Type: Bug
Reporter: Swapnil Ghike
Assignee: Swapnil Ghike
 Fix For: 0.8

 Attachments: kafka-1003.patch


 For consistency. We use clientId in the metric names elsewhere on clients.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (KAFKA-1003) ConsumerFetcherManager should pass clientId as metricsPrefix to AbstractFetcherManager

2013-09-16 Thread Swapnil Ghike (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swapnil Ghike updated KAFKA-1003:
-

Attachment: KAFKA-1003.patch

 ConsumerFetcherManager should pass clientId as metricsPrefix to 
 AbstractFetcherManager
 --

 Key: KAFKA-1003
 URL: https://issues.apache.org/jira/browse/KAFKA-1003
 Project: Kafka
  Issue Type: Bug
Reporter: Swapnil Ghike
Assignee: Swapnil Ghike
 Fix For: 0.8

 Attachments: kafka-1003.patch, KAFKA-1003.patch, KAFKA-1003.patch, 
 KAFKA-1003.patch, KAFKA-1003.patch, KAFKA-1003.patch, KAFKA-1003.patch, 
 KAFKA-1003.patch, KAFKA-1003.patch


 For consistency. We use clientId in the metric names elsewhere on clients.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (KAFKA-1003) ConsumerFetcherManager should pass clientId as metricsPrefix to AbstractFetcherManager

2013-09-16 Thread Swapnil Ghike (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13768495#comment-13768495
 ] 

Swapnil Ghike commented on KAFKA-1003:
--

Created reviewboard 

 ConsumerFetcherManager should pass clientId as metricsPrefix to 
 AbstractFetcherManager
 --

 Key: KAFKA-1003
 URL: https://issues.apache.org/jira/browse/KAFKA-1003
 Project: Kafka
  Issue Type: Bug
Reporter: Swapnil Ghike
Assignee: Swapnil Ghike
 Fix For: 0.8

 Attachments: kafka-1003.patch, KAFKA-1003.patch, KAFKA-1003.patch, 
 KAFKA-1003.patch, KAFKA-1003.patch, KAFKA-1003.patch, KAFKA-1003.patch, 
 KAFKA-1003.patch, KAFKA-1003.patch


 For consistency. We use clientId in the metric names elsewhere on clients.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Issue Comment Deleted] (KAFKA-1003) ConsumerFetcherManager should pass clientId as metricsPrefix to AbstractFetcherManager

2013-09-16 Thread Swapnil Ghike (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swapnil Ghike updated KAFKA-1003:
-

Comment: was deleted

(was: Created reviewboard https://reviews.apache.org/r/14149/
)

 ConsumerFetcherManager should pass clientId as metricsPrefix to 
 AbstractFetcherManager
 --

 Key: KAFKA-1003
 URL: https://issues.apache.org/jira/browse/KAFKA-1003
 Project: Kafka
  Issue Type: Bug
Reporter: Swapnil Ghike
Assignee: Swapnil Ghike
 Fix For: 0.8

 Attachments: kafka-1003.patch


 For consistency. We use clientId in the metric names elsewhere on clients.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (KAFKA-1003) ConsumerFetcherManager should pass clientId as metricsPrefix to AbstractFetcherManager

2013-09-16 Thread Swapnil Ghike (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swapnil Ghike updated KAFKA-1003:
-

Attachment: KAFKA-1003.patch

 ConsumerFetcherManager should pass clientId as metricsPrefix to 
 AbstractFetcherManager
 --

 Key: KAFKA-1003
 URL: https://issues.apache.org/jira/browse/KAFKA-1003
 Project: Kafka
  Issue Type: Bug
Reporter: Swapnil Ghike
Assignee: Swapnil Ghike
 Fix For: 0.8

 Attachments: kafka-1003.patch, KAFKA-1003.patch, KAFKA-1003.patch, 
 KAFKA-1003.patch, KAFKA-1003.patch, KAFKA-1003.patch, KAFKA-1003.patch, 
 KAFKA-1003.patch, KAFKA-1003.patch


 For consistency. We use clientId in the metric names elsewhere on clients.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (KAFKA-1003) ConsumerFetcherManager should pass clientId as metricsPrefix to AbstractFetcherManager

2013-09-16 Thread Swapnil Ghike (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13768494#comment-13768494
 ] 

Swapnil Ghike commented on KAFKA-1003:
--

Created reviewboard 

 ConsumerFetcherManager should pass clientId as metricsPrefix to 
 AbstractFetcherManager
 --

 Key: KAFKA-1003
 URL: https://issues.apache.org/jira/browse/KAFKA-1003
 Project: Kafka
  Issue Type: Bug
Reporter: Swapnil Ghike
Assignee: Swapnil Ghike
 Fix For: 0.8

 Attachments: kafka-1003.patch, KAFKA-1003.patch, KAFKA-1003.patch, 
 KAFKA-1003.patch, KAFKA-1003.patch, KAFKA-1003.patch, KAFKA-1003.patch, 
 KAFKA-1003.patch, KAFKA-1003.patch


 For consistency. We use clientId in the metric names elsewhere on clients.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (KAFKA-1053) Kafka patch review tool

2013-09-16 Thread Swapnil Ghike (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13768756#comment-13768756
 ] 

Swapnil Ghike commented on KAFKA-1053:
--

Saw a new error on RHEL when I tried 'python kafka-patch-review.py -b 
origin/trunk -j KAFKA-42 -r 14081':

Enter authorization information for Web API at reviews.apache.org

Your review request still exists, but the diff is not attached.

Creating diff against origin/trunk and uploading patch to JIRA KAFKA-42
Created a new reviewboard  


 Kafka patch review tool
 ---

 Key: KAFKA-1053
 URL: https://issues.apache.org/jira/browse/KAFKA-1053
 Project: Kafka
  Issue Type: New Feature
  Components: tools
Reporter: Neha Narkhede
Assignee: Neha Narkhede
 Attachments: KAFKA-1053-2013-09-15_09:40:04.patch, 
 KAFKA-1053_2013-09-15_20:28:01.patch, KAFKA-1053-followup2.patch, 
 KAFKA-1053-followup.patch, KAFKA-1053-v1.patch, KAFKA-1053-v1.patch, 
 KAFKA-1053-v1.patch, KAFKA-1053-v2.patch, KAFKA-1053-v3.patch


 Created a new patch review tool that will integrate JIRA and reviewboard - 
 https://cwiki.apache.org/confluence/display/KAFKA/Kafka+patch+review+tool

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Comment Edited] (KAFKA-1053) Kafka patch review tool

2013-09-16 Thread Swapnil Ghike (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13768756#comment-13768756
 ] 

Swapnil Ghike edited comment on KAFKA-1053 at 9/16/13 9:02 PM:
---

Saw a new issue on RHEL when I tried 'python kafka-patch-review.py -b 
origin/trunk -j KAFKA-42 -r 14081':

Enter authorization information for Web API at reviews.apache.org

Your review request still exists, but the diff is not attached.

Creating diff against origin/trunk and uploading patch to JIRA KAFKA-42
Created a new reviewboard  


It attached the diff, but did not create an RB.

  was (Author: swapnilghike):
Saw a new error on RHEL when I tried 'python kafka-patch-review.py -b 
origin/trunk -j KAFKA-42 -r 14081':

Enter authorization information for Web API at reviews.apache.org

Your review request still exists, but the diff is not attached.

Creating diff against origin/trunk and uploading patch to JIRA KAFKA-42
Created a new reviewboard  

  
 Kafka patch review tool
 ---

 Key: KAFKA-1053
 URL: https://issues.apache.org/jira/browse/KAFKA-1053
 Project: Kafka
  Issue Type: New Feature
  Components: tools
Reporter: Neha Narkhede
Assignee: Neha Narkhede
 Attachments: KAFKA-1053-2013-09-15_09:40:04.patch, 
 KAFKA-1053_2013-09-15_20:28:01.patch, KAFKA-1053-followup2.patch, 
 KAFKA-1053-followup.patch, KAFKA-1053-v1.patch, KAFKA-1053-v1.patch, 
 KAFKA-1053-v1.patch, KAFKA-1053-v2.patch, KAFKA-1053-v3.patch


 Created a new patch review tool that will integrate JIRA and reviewboard - 
 https://cwiki.apache.org/confluence/display/KAFKA/Kafka+patch+review+tool

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (KAFKA-1003) ConsumerFetcherManager should pass clientId as metricsPrefix to AbstractFetcherManager

2013-09-16 Thread Swapnil Ghike (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swapnil Ghike updated KAFKA-1003:
-

Attachment: KAFKA-1003_2013-09-16_14:13:04.patch

 ConsumerFetcherManager should pass clientId as metricsPrefix to 
 AbstractFetcherManager
 --

 Key: KAFKA-1003
 URL: https://issues.apache.org/jira/browse/KAFKA-1003
 Project: Kafka
  Issue Type: Bug
Reporter: Swapnil Ghike
Assignee: Swapnil Ghike
 Fix For: 0.8

 Attachments: KAFKA-1003_2013-09-16_14:13:04.patch, kafka-1003.patch


 For consistency. We use clientId in the metric names elsewhere on clients.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (KAFKA-1003) ConsumerFetcherManager should pass clientId as metricsPrefix to AbstractFetcherManager

2013-09-16 Thread Swapnil Ghike (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13768768#comment-13768768
 ] 

Swapnil Ghike commented on KAFKA-1003:
--

Updated reviewboard 

 ConsumerFetcherManager should pass clientId as metricsPrefix to 
 AbstractFetcherManager
 --

 Key: KAFKA-1003
 URL: https://issues.apache.org/jira/browse/KAFKA-1003
 Project: Kafka
  Issue Type: Bug
Reporter: Swapnil Ghike
Assignee: Swapnil Ghike
 Fix For: 0.8

 Attachments: KAFKA-1003_2013-09-16_14:13:04.patch, kafka-1003.patch


 For consistency. We use clientId in the metric names elsewhere on clients.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (KAFKA-42) Support rebalancing the partitions with replication

2013-09-16 Thread Swapnil Ghike (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-42?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swapnil Ghike updated KAFKA-42:
---

Attachment: KAFKA-42.patch

 Support rebalancing the partitions with replication
 ---

 Key: KAFKA-42
 URL: https://issues.apache.org/jira/browse/KAFKA-42
 Project: Kafka
  Issue Type: Bug
  Components: core
Reporter: Jun Rao
Assignee: Neha Narkhede
Priority: Blocker
  Labels: features
 Fix For: 0.8

 Attachments: KAFKA-42.patch, kafka-42-v1.patch, kafka-42-v2.patch, 
 kafka-42-v3.patch, kafka-42-v4.patch, kafka-42-v5.patch

   Original Estimate: 240h
  Remaining Estimate: 240h

 As new brokers are added, we need to support moving partition replicas from 
 one set of brokers to another, online.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (KAFKA-1003) ConsumerFetcherManager should pass clientId as metricsPrefix to AbstractFetcherManager

2013-09-16 Thread Swapnil Ghike (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13768771#comment-13768771
 ] 

Swapnil Ghike commented on KAFKA-1003:
--

Created reviewboard 

 ConsumerFetcherManager should pass clientId as metricsPrefix to 
 AbstractFetcherManager
 --

 Key: KAFKA-1003
 URL: https://issues.apache.org/jira/browse/KAFKA-1003
 Project: Kafka
  Issue Type: Bug
Reporter: Swapnil Ghike
Assignee: Swapnil Ghike
 Fix For: 0.8

 Attachments: kafka-1003.patch


 For consistency. We use clientId in the metric names elsewhere on clients.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (KAFKA-1003) ConsumerFetcherManager should pass clientId as metricsPrefix to AbstractFetcherManager

2013-09-16 Thread Swapnil Ghike (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13768770#comment-13768770
 ] 

Swapnil Ghike commented on KAFKA-1003:
--

Created reviewboard 

 ConsumerFetcherManager should pass clientId as metricsPrefix to 
 AbstractFetcherManager
 --

 Key: KAFKA-1003
 URL: https://issues.apache.org/jira/browse/KAFKA-1003
 Project: Kafka
  Issue Type: Bug
Reporter: Swapnil Ghike
Assignee: Swapnil Ghike
 Fix For: 0.8

 Attachments: kafka-1003.patch


 For consistency. We use clientId in the metric names elsewhere on clients.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (KAFKA-1003) ConsumerFetcherManager should pass clientId as metricsPrefix to AbstractFetcherManager

2013-09-16 Thread Swapnil Ghike (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swapnil Ghike updated KAFKA-1003:
-

Attachment: KAFKA-1003.patch

 ConsumerFetcherManager should pass clientId as metricsPrefix to 
 AbstractFetcherManager
 --

 Key: KAFKA-1003
 URL: https://issues.apache.org/jira/browse/KAFKA-1003
 Project: Kafka
  Issue Type: Bug
Reporter: Swapnil Ghike
Assignee: Swapnil Ghike
 Fix For: 0.8

 Attachments: kafka-1003.patch


 For consistency. We use clientId in the metric names elsewhere on clients.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (KAFKA-1003) ConsumerFetcherManager should pass clientId as metricsPrefix to AbstractFetcherManager

2013-09-16 Thread Swapnil Ghike (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swapnil Ghike updated KAFKA-1003:
-

Attachment: KAFKA-1003.patch

 ConsumerFetcherManager should pass clientId as metricsPrefix to 
 AbstractFetcherManager
 --

 Key: KAFKA-1003
 URL: https://issues.apache.org/jira/browse/KAFKA-1003
 Project: Kafka
  Issue Type: Bug
Reporter: Swapnil Ghike
Assignee: Swapnil Ghike
 Fix For: 0.8

 Attachments: kafka-1003.patch


 For consistency. We use clientId in the metric names elsewhere on clients.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (KAFKA-1003) ConsumerFetcherManager should pass clientId as metricsPrefix to AbstractFetcherManager

2013-09-16 Thread Swapnil Ghike (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swapnil Ghike updated KAFKA-1003:
-

Attachment: (was: KAFKA-1003.patch)

 ConsumerFetcherManager should pass clientId as metricsPrefix to 
 AbstractFetcherManager
 --

 Key: KAFKA-1003
 URL: https://issues.apache.org/jira/browse/KAFKA-1003
 Project: Kafka
  Issue Type: Bug
Reporter: Swapnil Ghike
Assignee: Swapnil Ghike
 Fix For: 0.8

 Attachments: kafka-1003.patch


 For consistency. We use clientId in the metric names elsewhere on clients.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (KAFKA-1003) ConsumerFetcherManager should pass clientId as metricsPrefix to AbstractFetcherManager

2013-09-16 Thread Swapnil Ghike (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swapnil Ghike updated KAFKA-1003:
-

Attachment: (was: KAFKA-1003.patch)

 ConsumerFetcherManager should pass clientId as metricsPrefix to 
 AbstractFetcherManager
 --

 Key: KAFKA-1003
 URL: https://issues.apache.org/jira/browse/KAFKA-1003
 Project: Kafka
  Issue Type: Bug
Reporter: Swapnil Ghike
Assignee: Swapnil Ghike
 Fix For: 0.8

 Attachments: kafka-1003.patch


 For consistency. We use clientId in the metric names elsewhere on clients.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (KAFKA-1003) ConsumerFetcherManager should pass clientId as metricsPrefix to AbstractFetcherManager

2013-09-16 Thread Swapnil Ghike (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swapnil Ghike updated KAFKA-1003:
-

Attachment: KAFKA-1003.patch

 ConsumerFetcherManager should pass clientId as metricsPrefix to 
 AbstractFetcherManager
 --

 Key: KAFKA-1003
 URL: https://issues.apache.org/jira/browse/KAFKA-1003
 Project: Kafka
  Issue Type: Bug
Reporter: Swapnil Ghike
Assignee: Swapnil Ghike
 Fix For: 0.8

 Attachments: kafka-1003.patch


 For consistency. We use clientId in the metric names elsewhere on clients.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (KAFKA-42) Support rebalancing the partitions with replication

2013-09-16 Thread Swapnil Ghike (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-42?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13768747#comment-13768747
 ] 

Swapnil Ghike commented on KAFKA-42:


Created reviewboard 

 Support rebalancing the partitions with replication
 ---

 Key: KAFKA-42
 URL: https://issues.apache.org/jira/browse/KAFKA-42
 Project: Kafka
  Issue Type: Bug
  Components: core
Reporter: Jun Rao
Assignee: Neha Narkhede
Priority: Blocker
  Labels: features
 Fix For: 0.8

 Attachments: KAFKA-42.patch, kafka-42-v1.patch, kafka-42-v2.patch, 
 kafka-42-v3.patch, kafka-42-v4.patch, kafka-42-v5.patch

   Original Estimate: 240h
  Remaining Estimate: 240h

 As new brokers are added, we need to support moving partition replicas from 
 one set of brokers to another, online.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (KAFKA-42) Support rebalancing the partitions with replication

2013-09-16 Thread Swapnil Ghike (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-42?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swapnil Ghike updated KAFKA-42:
---

Attachment: (was: KAFKA-42.patch)

 Support rebalancing the partitions with replication
 ---

 Key: KAFKA-42
 URL: https://issues.apache.org/jira/browse/KAFKA-42
 Project: Kafka
  Issue Type: Bug
  Components: core
Reporter: Jun Rao
Assignee: Neha Narkhede
Priority: Blocker
  Labels: features
 Fix For: 0.8

 Attachments: kafka-42-v1.patch, kafka-42-v2.patch, kafka-42-v3.patch, 
 kafka-42-v4.patch, kafka-42-v5.patch

   Original Estimate: 240h
  Remaining Estimate: 240h

 As new brokers are added, we need to support moving partition replicas from 
 one set of brokers to another, online.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Issue Comment Deleted] (KAFKA-1003) ConsumerFetcherManager should pass clientId as metricsPrefix to AbstractFetcherManager

2013-09-16 Thread Swapnil Ghike (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swapnil Ghike updated KAFKA-1003:
-

Comment: was deleted

(was: Created reviewboard https://reviews.apache.org/r/14161/
)

 ConsumerFetcherManager should pass clientId as metricsPrefix to 
 AbstractFetcherManager
 --

 Key: KAFKA-1003
 URL: https://issues.apache.org/jira/browse/KAFKA-1003
 Project: Kafka
  Issue Type: Bug
Reporter: Swapnil Ghike
Assignee: Swapnil Ghike
 Fix For: 0.8

 Attachments: kafka-1003.patch


 For consistency. We use clientId in the metric names elsewhere on clients.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (KAFKA-1003) ConsumerFetcherManager should pass clientId as metricsPrefix to AbstractFetcherManager

2013-09-15 Thread Swapnil Ghike (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13767962#comment-13767962
 ] 

Swapnil Ghike commented on KAFKA-1003:
--

Created reviewboard 

 ConsumerFetcherManager should pass clientId as metricsPrefix to 
 AbstractFetcherManager
 --

 Key: KAFKA-1003
 URL: https://issues.apache.org/jira/browse/KAFKA-1003
 Project: Kafka
  Issue Type: Bug
Reporter: Swapnil Ghike
Assignee: Swapnil Ghike
 Fix For: 0.8

 Attachments: kafka-1003.patch, KAFKA-1003.patch, KAFKA-1003.patch, 
 KAFKA-1003.patch


 For consistency. We use clientId in the metric names elsewhere on clients.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (KAFKA-1003) ConsumerFetcherManager should pass clientId as metricsPrefix to AbstractFetcherManager

2013-09-15 Thread Swapnil Ghike (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swapnil Ghike updated KAFKA-1003:
-

Attachment: KAFKA-1003.patch

 ConsumerFetcherManager should pass clientId as metricsPrefix to 
 AbstractFetcherManager
 --

 Key: KAFKA-1003
 URL: https://issues.apache.org/jira/browse/KAFKA-1003
 Project: Kafka
  Issue Type: Bug
Reporter: Swapnil Ghike
Assignee: Swapnil Ghike
 Fix For: 0.8

 Attachments: kafka-1003.patch, KAFKA-1003.patch, KAFKA-1003.patch, 
 KAFKA-1003.patch


 For consistency. We use clientId in the metric names elsewhere on clients.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (KAFKA-1003) ConsumerFetcherManager should pass clientId as metricsPrefix to AbstractFetcherManager

2013-09-15 Thread Swapnil Ghike (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swapnil Ghike updated KAFKA-1003:
-

Attachment: KAFKA-1003.patch

 ConsumerFetcherManager should pass clientId as metricsPrefix to 
 AbstractFetcherManager
 --

 Key: KAFKA-1003
 URL: https://issues.apache.org/jira/browse/KAFKA-1003
 Project: Kafka
  Issue Type: Bug
Reporter: Swapnil Ghike
Assignee: Swapnil Ghike
 Fix For: 0.8

 Attachments: kafka-1003.patch, KAFKA-1003.patch, KAFKA-1003.patch, 
 KAFKA-1003.patch


 For consistency. We use clientId in the metric names elsewhere on clients.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (KAFKA-1003) ConsumerFetcherManager should pass clientId as metricsPrefix to AbstractFetcherManager

2013-09-15 Thread Swapnil Ghike (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13767963#comment-13767963
 ] 

Swapnil Ghike commented on KAFKA-1003:
--

Created reviewboard 

 ConsumerFetcherManager should pass clientId as metricsPrefix to 
 AbstractFetcherManager
 --

 Key: KAFKA-1003
 URL: https://issues.apache.org/jira/browse/KAFKA-1003
 Project: Kafka
  Issue Type: Bug
Reporter: Swapnil Ghike
Assignee: Swapnil Ghike
 Fix For: 0.8

 Attachments: kafka-1003.patch, KAFKA-1003.patch, KAFKA-1003.patch, 
 KAFKA-1003.patch


 For consistency. We use clientId in the metric names elsewhere on clients.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Comment Edited] (KAFKA-1053) Kafka patch review tool

2013-09-15 Thread Swapnil Ghike (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13767967#comment-13767967
 ] 

Swapnil Ghike edited comment on KAFKA-1053 at 9/16/13 12:38 AM:


Hmm, tried setting up the tool according to the instructions for RHEL. Ran into 
this:
~/kafka/kafka$ python kafka-patch-review.py --help
Traceback (most recent call last):
  File "kafka-patch-review.py", line 3, in <module>
    import argparse
ImportError: No module named argparse

Does the easy_install thing work only on Mac (jira-python and RBTools are 
installed using easy_install)?

On the Mac, I got this:
~/kafka-local/kafka$ echo $JIRA_CMDLINE_HOME 
.
~/kafka-local/kafka$ python kafka-patch-review.py -b 0.8 -j KAFKA-1003 -db
Jira Home= .
git diff 0.8 > KAFKA-1003.patch
Creating diff against 0.8 and uploading patch to JIRA KAFKA-1003
Creating a new reviewboard
post-review --publish --tracking-branch 0.8 --target-groups=kafka 
--bugs-closed=KAFKA-1003 --summary "Patch for KAFKA-1003"
There don't seem to be any diffs!

rb url= 

If you take a look at KAFKA-1003, it has appended my diffs, it just did not 
create a review board. I guess this is expected.


  was (Author: swapnilghike):
Hmm, tried setting up the tool according to the instructions for RHEL. Ran 
into this:
~/kafka/kafka$ python kafka-patch-review.py --help
Traceback (most recent call last):
  File "kafka-patch-review.py", line 3, in <module>
    import argparse
ImportError: No module named argparse

Does the easy_install thing work only on Mac (jira-python and RBTools are 
installed using easy_install)?

On the Mac, I got this:
~/kafka-local/kafka$ echo $JIRA_CMDLINE_HOME 
.
~/kafka-local/kafka$ python kafka-patch-review.py -b 0.8 -j KAFKA-1003 -db
Jira Home= .
git diff 0.8 > KAFKA-1003.patch
Creating diff against 0.8 and uploading patch to JIRA KAFKA-1003
Creating a new reviewboard
post-review --publish --tracking-branch 0.8 --target-groups=kafka 
--bugs-closed=KAFKA-1003 --summary "Patch for KAFKA-1003"
There don't seem to be any diffs!

rb url= 

  
 Kafka patch review tool
 ---

 Key: KAFKA-1053
 URL: https://issues.apache.org/jira/browse/KAFKA-1053
 Project: Kafka
  Issue Type: New Feature
  Components: tools
Reporter: Neha Narkhede
Assignee: Neha Narkhede
 Attachments: KAFKA-1053-2013-09-15_09:40:04.patch, 
 KAFKA-1053-followup2.patch, KAFKA-1053-followup.patch, KAFKA-1053-v1.patch, 
 KAFKA-1053-v1.patch, KAFKA-1053-v1.patch, KAFKA-1053-v2.patch, 
 KAFKA-1053-v3.patch


 Created a new patch review tool that will integrate JIRA and reviewboard - 
 https://cwiki.apache.org/confluence/display/KAFKA/Kafka+patch+review+tool

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (KAFKA-1003) ConsumerFetcherManager should pass clientId as metricsPrefix to AbstractFetcherManager

2013-09-15 Thread Swapnil Ghike (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13767976#comment-13767976
 ] 

Swapnil Ghike commented on KAFKA-1003:
--

Created reviewboard 

 ConsumerFetcherManager should pass clientId as metricsPrefix to 
 AbstractFetcherManager
 --

 Key: KAFKA-1003
 URL: https://issues.apache.org/jira/browse/KAFKA-1003
 Project: Kafka
  Issue Type: Bug
Reporter: Swapnil Ghike
Assignee: Swapnil Ghike
 Fix For: 0.8

 Attachments: kafka-1003.patch, KAFKA-1003.patch, KAFKA-1003.patch, 
 KAFKA-1003.patch, KAFKA-1003.patch, KAFKA-1003.patch


 For consistency. We use clientId in the metric names elsewhere on clients.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (KAFKA-1030) Addition of partitions requires bouncing all the consumers of that topic

2013-09-10 Thread Swapnil Ghike (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swapnil Ghike updated KAFKA-1030:
-

Assignee: Guozhang Wang  (was: Swapnil Ghike)

 Addition of partitions requires bouncing all the consumers of that topic
 

 Key: KAFKA-1030
 URL: https://issues.apache.org/jira/browse/KAFKA-1030
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8
Reporter: Swapnil Ghike
Assignee: Guozhang Wang
Priority: Blocker
 Fix For: 0.8


 Consumer may not notice new partitions because the propagation of the 
 metadata to servers can be delayed. 
 Options:
 1. As Jun suggested on KAFKA-956, the easiest fix would be to read the new 
 partition data from zookeeper instead of a kafka server.
 2. Run a fetch metadata loop in consumer, and set auto.offset.reset to 
 smallest once the consumer has started.
 1 sounds easier to do. If 1 causes long delays in reading all partitions at 
 the start of every rebalance, 2 may be worth considering.
  
 The same issue affects MirrorMaker when new topics are created, MirrorMaker 
 may not notice all partitions of the new topics until the next rebalance.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (KAFKA-1006) Mirror maker loses messages of a new topic

2013-09-05 Thread Swapnil Ghike (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swapnil Ghike updated KAFKA-1006:
-

Description: 
Consumer currently uses auto.offset.reset = largest by default. If a new topic 
is created, consumer's topic watcher is fired. The consumer will first finish 
partition reassignment as part of rebalance and then start consuming from the 
tail of each partition. Until the partition reassignment is over, the server 
may have appended new messages to the new topic, and the consumer won't consume 
these messages. Thus, multiple batches of messages may be lost when a topic is 
newly created.

The fix is to start consuming from the earliest offset for newly created topics.

  was:
Mirror maker currently uses auto.offset.reset = largest on the consumer side by 
default. If a new topic is created, consumer's topic watcher is fired. The 
consumer will first finish partition reassignment as part of rebalance and then 
start consuming from the tail of each partition. Until the partition 
reassignment is over, the server may have appended new messages to the new 
topic, and the mirror maker won't consume these messages. Thus, multiple 
batches of messages may be lost when a topic is newly created.

The fix is to start consuming from the earliest offset for newly created topics.


 Mirror maker loses messages of a new topic
 --

 Key: KAFKA-1006
 URL: https://issues.apache.org/jira/browse/KAFKA-1006
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8
Reporter: Swapnil Ghike

 Consumer currently uses auto.offset.reset = largest by default. If a new 
 topic is created, consumer's topic watcher is fired. The consumer will first 
 finish partition reassignment as part of rebalance and then start consuming 
 from the tail of each partition. Until the partition reassignment is over, 
 the server may have appended new messages to the new topic, and the consumer 
 won't consume these messages. Thus, multiple batches of messages may be lost 
 when a topic is newly created.
 The fix is to start consuming from the earliest offset for newly created 
 topics.
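
From the consumer side, the workaround amounts to one config change; a small 
sketch, with illustrative connection values ("smallest" is the 0.8 spelling of 
"earliest"):

import java.util.Properties

object EarliestOffsetConfigSketch {
  def main(args: Array[String]): Unit = {
    val props = new Properties()
    props.put("zookeeper.connect", "localhost:2181") // hypothetical
    props.put("group.id", "mirror-maker-group")      // hypothetical
    props.put("auto.offset.reset", "smallest")       // default is "largest"
    // new ConsumerConfig(props) would pick this up, so a freshly created
    // topic is consumed from its earliest offset instead of its tail
    println(props)
  }
}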

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (KAFKA-1006) Consumer loses messages of a new topic with auto.offset.reset = largest

2013-09-05 Thread Swapnil Ghike (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swapnil Ghike updated KAFKA-1006:
-

Summary: Consumer loses messages of a new topic with auto.offset.reset = 
largest  (was: Mirror maker loses messages of a new topic)

 Consumer loses messages of a new topic with auto.offset.reset = largest
 ---

 Key: KAFKA-1006
 URL: https://issues.apache.org/jira/browse/KAFKA-1006
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8
Reporter: Swapnil Ghike

 Consumer currently uses auto.offset.reset = largest by default. If a new 
 topic is created, consumer's topic watcher is fired. The consumer will first 
 finish partition reassignment as part of rebalance and then start consuming 
 from the tail of each partition. Until the partition reassignment is over, 
 the server may have appended new messages to the new topic, and the consumer 
 won't consume these messages. Thus, multiple batches of messages may be lost 
 when a topic is newly created.
 The fix is to start consuming from the earliest offset for newly created 
 topics.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (KAFKA-1030) Addition of partitions requires bouncing all the consumers of that topic

2013-08-28 Thread Swapnil Ghike (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swapnil Ghike updated KAFKA-1030:
-

Description: Consumer may not notice new partitions because the propagation 
of the metadata to servers can be delayed. 

 Addition of partitions requires bouncing all the consumers of that topic
 

 Key: KAFKA-1030
 URL: https://issues.apache.org/jira/browse/KAFKA-1030
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8
Reporter: Swapnil Ghike
Assignee: Swapnil Ghike
Priority: Blocker
 Fix For: 0.8


 Consumer may not notice new partitions because the propagation of the 
 metadata to servers can be delayed. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (KAFKA-1030) Addition of partitions requires bouncing all the consumers of that topic

2013-08-28 Thread Swapnil Ghike (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swapnil Ghike updated KAFKA-1030:
-

Description: 
Consumer may not notice new partitions because the propagation of the metadata 
to servers can be delayed. 

Options:
1. 
2. Run a fetch metadata loop in consumer, and set auto.offset.reset to smallest 
once the consumer has started.
 

  was:Consumer may not notice new partitions because the propagation of the 
metadata to servers can be delayed. 


 Addition of partitions requires bouncing all the consumers of that topic
 

 Key: KAFKA-1030
 URL: https://issues.apache.org/jira/browse/KAFKA-1030
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8
Reporter: Swapnil Ghike
Assignee: Swapnil Ghike
Priority: Blocker
 Fix For: 0.8


 Consumer may not notice new partitions because the propagation of the 
 metadata to servers can be delayed. 
 Options:
 1. 
 2. Run a fetch metadata loop in consumer, and set auto.offset.reset to 
 smallest once the consumer has started.
  

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (KAFKA-1030) Addition of partitions requires bouncing all the consumers of that topic

2013-08-28 Thread Swapnil Ghike (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13752679#comment-13752679
 ] 

Swapnil Ghike commented on KAFKA-1030:
--

Hmm, this will mean that the consumer client will cease to be 
controller-agnostic. Is that a good idea? Also, if a controller failover 
happens at the same time as a consumer trying to fetch metadata, the broker the 
consumer was talking to may have stale metadata. So we may need to implement a 
controller failover watcher on the consumer to trigger a metadata fetch. 
Thoughts?

 Addition of partitions requires bouncing all the consumers of that topic
 

 Key: KAFKA-1030
 URL: https://issues.apache.org/jira/browse/KAFKA-1030
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8
Reporter: Swapnil Ghike
Assignee: Swapnil Ghike
Priority: Blocker
 Fix For: 0.8


 Consumer may not notice new partitions because the propagation of the 
 metadata to servers can be delayed. 
 Options:
 1. As Jun suggested on KAFKA-956, the easiest fix would be to read the new 
 partition data from zookeeper instead of a kafka server.
 2. Run a fetch metadata loop in consumer, and set auto.offset.reset to 
 smallest once the consumer has started.
 1 sounds easier to do. If 1 causes long delays in reading all partitions at 
 the start of every rebalance, 2 may be worth considering.
  

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (KAFKA-1026) Dynamically Adjust Batch Size Upon Receiving MessageSizeTooLargeException

2013-08-27 Thread Swapnil Ghike (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13751519#comment-13751519
 ] 

Swapnil Ghike commented on KAFKA-1026:
--

As David Demaagd mentioned earlier, it would be useful to base the decision to 
send messages on bytes/numMessages/time.

 Dynamically Adjust Batch Size Upon Receiving MessageSizeTooLargeException
 -

 Key: KAFKA-1026
 URL: https://issues.apache.org/jira/browse/KAFKA-1026
 Project: Kafka
  Issue Type: Bug
Reporter: Guozhang Wang
 Fix For: 0.8.1


 Among the exceptions that can possibly received in Producer.send(), 
 MessageSizeTooLargeException is currently not recoverable since the producer 
 does not change the batch size but still retries on sending. It is better to 
 have a dynamic batch size adjustment mechanism based on 
 MessageSizeTooLargeException.
 This is related to KAFKA-998
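
One possible shape of the adjustment, sketched with a stubbed send() and a 
local exception class rather than the producer's real API: halve the batch on a 
too-large error and retry each half.

object DynamicBatchSketch {
  class MessageSizeTooLargeException extends RuntimeException

  // stub: reject any batch whose total payload exceeds maxBytes
  def send(batch: Seq[String], maxBytes: Int): Unit =
    if (batch.map(_.length).sum > maxBytes) throw new MessageSizeTooLargeException

  // shrink-and-retry; a single message that is itself too large still
  // fails, which is genuinely unrecoverable
  def sendWithShrink(batch: Seq[String], maxBytes: Int): Unit =
    try send(batch, maxBytes)
    catch {
      case _: MessageSizeTooLargeException if batch.size > 1 =>
        val (left, right) = batch.splitAt(batch.size / 2)
        sendWithShrink(left, maxBytes)
        sendWithShrink(right, maxBytes)
    }

  def main(args: Array[String]): Unit =
    sendWithShrink(Seq.fill(8)("x" * 100), maxBytes = 250)
}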

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (KAFKA-1017) High number of open file handles in 0.8 producer

2013-08-19 Thread Swapnil Ghike (JIRA)
Swapnil Ghike created KAFKA-1017:


 Summary: High number of open file handles in 0.8 producer
 Key: KAFKA-1017
 URL: https://issues.apache.org/jira/browse/KAFKA-1017
 Project: Kafka
  Issue Type: Bug
Reporter: Swapnil Ghike
Assignee: Swapnil Ghike


For over-partitioned topics, each broker could be the leader for at least 1 
partition. In the producer, we randomly select a partition to send the data. 
Pretty soon, each producer will establish a connection to each of the n 
brokers. Effectively, we increased the # of socket connections by a factor of 
n, compared to 0.7.

The increased number of socket connections increases the number of open file 
handles, which could come pretty close to the OS limit if left unnoticed. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (KAFKA-1017) High number of open file handles in 0.8 producer

2013-08-19 Thread Swapnil Ghike (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swapnil Ghike updated KAFKA-1017:
-

Attachment: kafka-1017.patch

This patch fixes two issues:

1. Reduce the number of open file handles on the producer: I think we can clear 
the topic-partition send-cache in the EventHandler once every metadata refresh 
interval. This will make the producer produce to the same partition of a given 
topic for, say, 15 mins. That should be ok. The topic-partition send-cache will 
be refreshed when the metadata is refreshed.

2. There is a race condition between Producer.send() and Producer.close(). This 
can lead to reopening of a closed ProducerPool and thereby to socket leaks.
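
A simplified sketch of fix 1, with illustrative names (not the EventHandler's 
actual fields): keep the chosen partition per topic and drop the whole cache 
once per metadata refresh interval, so sticky partitioning bounds the number of 
broker connections.

object StickyPartitionCacheSketch {
  val refreshIntervalMs = 15 * 60 * 1000L // cf. topic.metadata.refresh.interval.ms
  private var lastClearMs = System.currentTimeMillis()
  private val cache = scala.collection.mutable.Map[String, Int]()

  def partitionFor(topic: String, numPartitions: Int): Int = {
    val now = System.currentTimeMillis()
    if (now - lastClearMs > refreshIntervalMs) { cache.clear(); lastClearMs = now }
    // stick to one random partition per topic until the next cache clear
    cache.getOrElseUpdate(topic, scala.util.Random.nextInt(numPartitions))
  }

  def main(args: Array[String]): Unit =
    println((1 to 3).map(_ => partitionFor("my-topic", 8))) // same value 3 times
}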

 High number of open file handles in 0.8 producer
 

 Key: KAFKA-1017
 URL: https://issues.apache.org/jira/browse/KAFKA-1017
 Project: Kafka
  Issue Type: Bug
Reporter: Swapnil Ghike
Assignee: Swapnil Ghike
 Attachments: kafka-1017.patch


 Reported by Jun Rao:
 For over-partitioned topics, each broker could be the leader for at least 1 
 partition. In the producer, we randomly select a partition to send the data. 
 Pretty soon, each producer will establish a connection to each of the n 
 brokers. Effectively, we increased the # of socket connections by a factor of 
 n, compared to 0.7.
 The increased number of socket connections increases the number of open file 
 handles, which could come pretty close to the OS limit.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (KAFKA-990) Fix ReassignPartitionCommand and improve usability

2013-08-12 Thread Swapnil Ghike (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13737162#comment-13737162
 ] 

Swapnil Ghike commented on KAFKA-990:
-

The rebased patch also failed for me on 0.8 HEAD


 Fix ReassignPartitionCommand and improve usability
 --

 Key: KAFKA-990
 URL: https://issues.apache.org/jira/browse/KAFKA-990
 Project: Kafka
  Issue Type: Bug
Reporter: Sriram Subramanian
Assignee: Sriram Subramanian
 Attachments: KAFKA-990-v1.patch, KAFKA-990-v1-rebased.patch


 1. The tool does not register for IsrChangeListener on controller failover.
 2. There is a race condition where the previous listener can fire on 
 controller failover and the replicas can be in ISR. Even after re-registering 
 the ISR listener after failover, it will never be triggered.
 3. The input to the tool is a static list, which is very hard to use. To 
 improve this, as a first step the tool needs to take a list of topics and a 
 list of brokers to assign to, and then generate the reassignment plan, as 
 sketched below.
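
A toy sketch of that proposal: given topics and target brokers, generate a 
plan. Round-robin placement here is purely illustrative; the real tool's 
placement logic may differ.

object ReassignmentPlanSketch {
  def plan(topicPartitions: Map[String, Int],
           brokers: Seq[Int],
           replicas: Int): Map[(String, Int), Seq[Int]] =
    (for {
      (topic, numPartitions) <- topicPartitions.toSeq
      p <- 0 until numPartitions
      // place each replica of partition p on the next broker, round-robin
    } yield (topic, p) -> (0 until replicas).map(r => brokers((p + r) % brokers.size))).toMap

  def main(args: Array[String]): Unit =
    plan(Map("my-topic" -> 4), brokers = Seq(0, 1, 2), replicas = 2)
      .toSeq.sortBy(_._1._2)
      .foreach { case ((t, p), rs) => println(t + "-" + p + " -> replicas " + rs.mkString(",")) }
}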

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Comment Edited] (KAFKA-990) Fix ReassignPartitionCommand and improve usability

2013-08-12 Thread Swapnil Ghike (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13737162#comment-13737162
 ] 

Swapnil Ghike edited comment on KAFKA-990 at 8/12/13 6:23 PM:
--

The rebased patch also failed for me on 0.8 HEAD

$ patch -p1 --dry-run < ~/Downloads/KAFKA-990-v1-rebased.patch
patching file core/src/main/scala/kafka/admin/ReassignPartitionsCommand.scala
patching file core/src/main/scala/kafka/utils/ZkUtils.scala
Hunk #1 succeeded at 620 (offset 42 lines).
patching file core/src/main/scala/kafka/admin/ReassignPartitionsCommand.scala
Hunk #1 FAILED at 29.
Hunk #2 FAILED at 81.


  was (Author: swapnilghike):
The rebased patch also failed for me on 0.8 HEAD

  
 Fix ReassignPartitionCommand and improve usability
 --

 Key: KAFKA-990
 URL: https://issues.apache.org/jira/browse/KAFKA-990
 Project: Kafka
  Issue Type: Bug
Reporter: Sriram Subramanian
Assignee: Sriram Subramanian
 Attachments: KAFKA-990-v1.patch, KAFKA-990-v1-rebased.patch


 1. The tool does not register for IsrChangeListener on controller failover.
 2. There is a race condition where the previous listener can fire on 
 controller failover and the replicas can be in ISR. Even after re-registering 
 the ISR listener after failover, it will never be triggered.
 3. The input to the tool is a static list, which is very hard to use. To improve
 this, as a first step the tool needs to take a list of topics and a list of
 brokers to assign to, and then generate the reassignment plan.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (KAFKA-990) Fix ReassignPartitionCommand and improve usability

2013-08-12 Thread Swapnil Ghike (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13737199#comment-13737199
 ] 

Swapnil Ghike commented on KAFKA-990:
-

v1 works with git apply.
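
For reference, the equivalent dry run with git (paths illustrative):

$ git apply --check KAFKA-990-v1.patch   # exits silently when the patch applies cleanly
$ git apply KAFKA-990-v1.patch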

 Fix ReassignPartitionCommand and improve usability
 --

 Key: KAFKA-990
 URL: https://issues.apache.org/jira/browse/KAFKA-990
 Project: Kafka
  Issue Type: Bug
Reporter: Sriram Subramanian
Assignee: Sriram Subramanian
 Attachments: KAFKA-990-v1.patch, KAFKA-990-v1-rebased.patch


 1. The tool does not register for IsrChangeListener on controller failover.
 2. There is a race condition where the previous listener can fire on 
 controller failover and the replicas can be in ISR. Even after re-registering 
 the ISR listener after failover, it will never be triggered.
 3. The input to the tool is a static list, which is very hard to use. To improve
 this, as a first step the tool needs to take a list of topics and a list of
 brokers to assign to, and then generate the reassignment plan.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (KAFKA-1006) Mirror maker loses messages of a new topic

2013-08-09 Thread Swapnil Ghike (JIRA)
Swapnil Ghike created KAFKA-1006:


 Summary: Mirror maker loses messages of a new topic
 Key: KAFKA-1006
 URL: https://issues.apache.org/jira/browse/KAFKA-1006
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8
Reporter: Swapnil Ghike


Mirror maker currently uses auto.offset.reset = largest on the consumer side by
default. If a new topic is created, the consumer's topic watcher is fired. The
consumer will first finish partition reassignment as part of the rebalance and
then start consuming from the tail of each partition. While the partition
reassignment is in progress, the server may have appended new messages to the
new topic; the mirror maker won't consume these messages. Thus, multiple
batches of messages may be lost when a topic is newly created.

The fix is to start consuming from the earliest offset for newly created topics.
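
A minimal sketch of that consumer-side setting for the 0.8 high-level consumer,
where the valid values of auto.offset.reset are smallest and largest; the
connection values below are illustrative:

import java.util.Properties
import kafka.consumer.ConsumerConfig

val props = new Properties()
props.put("zookeeper.connect", "localhost:2181") // illustrative
props.put("group.id", "mirror-maker-group")      // illustrative
// Default is "largest"; "smallest" ensures batches appended to a brand-new
// topic during the rebalance are consumed once reassignment completes.
props.put("auto.offset.reset", "smallest")
val consumerConfig = new ConsumerConfig(props)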

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (KAFKA-999) Controlled shutdown never succeeds until the broker is killed

2013-08-06 Thread Swapnil Ghike (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13730439#comment-13730439
 ] 

Swapnil Ghike commented on KAFKA-999:
-

Since we need the leader broker's host:port to create a ReplicaFetcherThread, the
easiest fix for this ticket's purpose seems to be to pass all the leaders through
the LeaderAndIsrRequest:

val leaders = liveOrShuttingDownBrokers.filter(b => leaderIds.contains(b.id))
val leaderAndIsrRequest = new LeaderAndIsrRequest(partitionStateInfos, leaders, 
controllerId, controllerEpoch, correlationId, clientId)

Any suggestions on avoiding a wire protocol change?
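
For context, a hedged sketch of the broker-side check that produces the deadlock
described below; the names follow the 0.8 become-follower code path but this is
not an exact excerpt:

// The follower aborts the state change when the new leader is absent from the
// leaders set carried by the LeaderAndIsrRequest.
leaders.find(_.id == newLeaderBrokerId) match {
  case Some(leaderBroker) =>
    // The leader's host:port is known, so a ReplicaFetcherThread can be started.
    makeFollower(topicAndPartition, leaderBroker) // illustrative helper
  case None =>
    // Become-follower is aborted; the shutting-down broker never rejoins the
    // ISR, so controlled shutdown on the current leader can never complete.
    stateChangeLogger.error("Aborting become follower for %s: leader %d is not alive"
      .format(topicAndPartition, newLeaderBrokerId))
}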

 Controlled shutdown never succeeds until the broker is killed
 -

 Key: KAFKA-999
 URL: https://issues.apache.org/jira/browse/KAFKA-999
 Project: Kafka
  Issue Type: Bug
  Components: controller
Affects Versions: 0.8
Reporter: Neha Narkhede
Assignee: Neha Narkhede
Priority: Critical

 A race condition in the way leader and isr request is handled by the broker 
 and controlled shutdown can lead to a situation where controlled shutdown can 
 never succeed and the only way to bounce the broker is to kill it.
 The root cause is that the broker uses a smart check to avoid fetching from a
 leader that is not alive according to the controller. This leads to the broker
 aborting a become-follower request. And in cases where the replication factor
 is 2, the leader can never be transferred to a follower, since it keeps
 rejecting the become-follower request and stays out of the ISR. This causes
 controlled shutdown to fail forever.
 One sequence of events that led to this bug is as follows -
 - Broker 2 is leader and controller
 - Broker 2 is bounced (uncontrolled shutdown)
 - Controller fails over
 - Controlled shutdown is invoked on broker 1
 - Controller starts leader election for partitions that broker 2 led
 - Controller sends become follower request with leader as broker 1 to broker 
 2. At the same time, it does not include broker 1 in alive broker list sent 
 as part of leader and isr request
 - Broker 2 rejects leaderAndIsr request since leader is not in the list of 
 alive brokers
 - Broker 1 fails to transfer leadership to broker 2 since broker 2 is not in 
 ISR
 - Controlled shutdown can never succeed on broker 1
 Since controlled shutdown is a config option, if there are bugs in controlled 
 shutdown, there is no option but to kill the broker

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (KAFKA-999) Controlled shutdown never succeeds until the broker is killed

2013-08-06 Thread Swapnil Ghike (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13731174#comment-13731174
 ] 

Swapnil Ghike commented on KAFKA-999:
-

Actually that's not needed, will get a patch out in a couple hours.

 Controlled shutdown never succeeds until the broker is killed
 -

 Key: KAFKA-999
 URL: https://issues.apache.org/jira/browse/KAFKA-999
 Project: Kafka
  Issue Type: Bug
  Components: controller
Affects Versions: 0.8
Reporter: Neha Narkhede
Assignee: Neha Narkhede
Priority: Critical

 A race condition in the way leader and isr request is handled by the broker 
 and controlled shutdown can lead to a situation where controlled shutdown can 
 never succeed and the only way to bounce the broker is to kill it.
 The root cause is that the broker uses a smart check to avoid fetching from a
 leader that is not alive according to the controller. This leads to the broker
 aborting a become-follower request. And in cases where the replication factor
 is 2, the leader can never be transferred to a follower, since it keeps
 rejecting the become-follower request and stays out of the ISR. This causes
 controlled shutdown to fail forever.
 One sequence of events that led to this bug is as follows -
 - Broker 2 is leader and controller
 - Broker 2 is bounced (uncontrolled shutdown)
 - Controller fails over
 - Controlled shutdown is invoked on broker 1
 - Controller starts leader election for partitions that broker 2 led
 - Controller sends become follower request with leader as broker 1 to broker 
 2. At the same time, it does not include broker 1 in alive broker list sent 
 as part of leader and isr request
 - Broker 2 rejects leaderAndIsr request since leader is not in the list of 
 alive brokers
 - Broker 1 fails to transfer leadership to broker 2 since broker 2 is not in 
 ISR
 - Controlled shutdown can never succeed on broker 1
 Since controlled shutdown is a config option, if there are bugs in controlled 
 shutdown, there is no option but to kill the broker

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Issue Comment Deleted] (KAFKA-999) Controlled shutdown never succeeds until the broker is killed

2013-08-06 Thread Swapnil Ghike (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swapnil Ghike updated KAFKA-999:


Comment: was deleted

(was: Actually that's not needed, will get a patch out in a couple hours.)

 Controlled shutdown never succeeds until the broker is killed
 -

 Key: KAFKA-999
 URL: https://issues.apache.org/jira/browse/KAFKA-999
 Project: Kafka
  Issue Type: Bug
  Components: controller
Affects Versions: 0.8
Reporter: Neha Narkhede
Assignee: Neha Narkhede
Priority: Critical

 A race condition in the way leader and isr request is handled by the broker 
 and controlled shutdown can lead to a situation where controlled shutdown can 
 never succeed and the only way to bounce the broker is to kill it.
 The root cause is that the broker uses a smart check to avoid fetching from a
 leader that is not alive according to the controller. This leads to the broker
 aborting a become-follower request. And in cases where the replication factor
 is 2, the leader can never be transferred to a follower, since it keeps
 rejecting the become-follower request and stays out of the ISR. This causes
 controlled shutdown to fail forever.
 One sequence of events that led to this bug is as follows -
 - Broker 2 is leader and controller
 - Broker 2 is bounced (uncontrolled shutdown)
 - Controller fails over
 - Controlled shutdown is invoked on broker 1
 - Controller starts leader election for partitions that broker 2 led
 - Controller sends become follower request with leader as broker 1 to broker 
 2. At the same time, it does not include broker 1 in alive broker list sent 
 as part of leader and isr request
 - Broker 2 rejects leaderAndIsr request since leader is not in the list of 
 alive brokers
 - Broker 1 fails to transfer leadership to broker 2 since broker 2 is not in 
 ISR
 - Controlled shutdown can never succeed on broker 1
 Since controlled shutdown is a config option, if there are bugs in controlled 
 shutdown, there is no option but to kill the broker

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (KAFKA-999) Controlled shutdown never succeeds until the broker is killed

2013-08-06 Thread Swapnil Ghike (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swapnil Ghike updated KAFKA-999:


Attachment: kafka-999-v1.patch

 Controlled shutdown never succeeds until the broker is killed
 -

 Key: KAFKA-999
 URL: https://issues.apache.org/jira/browse/KAFKA-999
 Project: Kafka
  Issue Type: Bug
  Components: controller
Affects Versions: 0.8
Reporter: Neha Narkhede
Assignee: Swapnil Ghike
Priority: Critical
 Attachments: kafka-999-v1.patch


 A race condition in the way leader and isr request is handled by the broker 
 and controlled shutdown can lead to a situation where controlled shutdown can 
 never succeed and the only way to bounce the broker is to kill it.
 The root cause is that the broker uses a smart check to avoid fetching from a
 leader that is not alive according to the controller. This leads to the broker
 aborting a become-follower request. And in cases where the replication factor
 is 2, the leader can never be transferred to a follower, since it keeps
 rejecting the become-follower request and stays out of the ISR. This causes
 controlled shutdown to fail forever.
 One sequence of events that led to this bug is as follows -
 - Broker 2 is leader and controller
 - Broker 2 is bounced (uncontrolled shutdown)
 - Controller fails over
 - Controlled shutdown is invoked on broker 1
 - Controller starts leader election for partitions that broker 2 led
 - Controller sends become follower request with leader as broker 1 to broker 
 2. At the same time, it does not include broker 1 in alive broker list sent 
 as part of leader and isr request
 - Broker 2 rejects leaderAndIsr request since leader is not in the list of 
 alive brokers
 - Broker 1 fails to transfer leadership to broker 2 since broker 2 is not in 
 ISR
 - Controlled shutdown can never succeed on broker 1
 Since controlled shutdown is a config option, if there are bugs in controlled 
 shutdown, there is no option but to kill the broker

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (KAFKA-999) Controlled shutdown never succeeds until the broker is killed

2013-08-06 Thread Swapnil Ghike (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swapnil Ghike updated KAFKA-999:


Attachment: kafka-999-v2.patch

Thanks for pointing that out. Actually, in ControllerChannelManager we should
rather pass liveOrShuttingDownBrokers.filter(b => leaderIds.contains(b.id)) as
the leaders to LeaderAndIsrRequest.

Attached patch v2.

 Controlled shutdown never succeeds until the broker is killed
 -

 Key: KAFKA-999
 URL: https://issues.apache.org/jira/browse/KAFKA-999
 Project: Kafka
  Issue Type: Bug
  Components: controller
Affects Versions: 0.8
Reporter: Neha Narkhede
Assignee: Swapnil Ghike
Priority: Critical
 Attachments: kafka-999-v1.patch, kafka-999-v2.patch


 A race condition in the way leader and isr request is handled by the broker 
 and controlled shutdown can lead to a situation where controlled shutdown can 
 never succeed and the only way to bounce the broker is to kill it.
 The root cause is that the broker uses a smart check to avoid fetching from a
 leader that is not alive according to the controller. This leads to the broker
 aborting a become-follower request. And in cases where the replication factor
 is 2, the leader can never be transferred to a follower, since it keeps
 rejecting the become-follower request and stays out of the ISR. This causes
 controlled shutdown to fail forever.
 One sequence of events that led to this bug is as follows -
 - Broker 2 is leader and controller
 - Broker 2 is bounced (uncontrolled shutdown)
 - Controller fails over
 - Controlled shutdown is invoked on broker 1
 - Controller starts leader election for partitions that broker 2 led
 - Controller sends become follower request with leader as broker 1 to broker 
 2. At the same time, it does not include broker 1 in alive broker list sent 
 as part of leader and isr request
 - Broker 2 rejects leaderAndIsr request since leader is not in the list of 
 alive brokers
 - Broker 1 fails to transfer leadership to broker 2 since broker 2 is not in 
 ISR
 - Controlled shutdown can never succeed on broker 1
 Since controlled shutdown is a config option, if there are bugs in controlled 
 shutdown, there is no option but to kill the broker

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (KAFKA-999) Controlled shutdown never succeeds until the broker is killed

2013-08-06 Thread Swapnil Ghike (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swapnil Ghike updated KAFKA-999:


Attachment: kafka-999-v3.patch

 Controlled shutdown never succeeds until the broker is killed
 -

 Key: KAFKA-999
 URL: https://issues.apache.org/jira/browse/KAFKA-999
 Project: Kafka
  Issue Type: Bug
  Components: controller
Affects Versions: 0.8
Reporter: Neha Narkhede
Assignee: Swapnil Ghike
Priority: Critical
 Attachments: kafka-999-v1.patch, kafka-999-v2.patch, 
 kafka-999-v3.patch


 A race condition in the way leader and isr request is handled by the broker 
 and controlled shutdown can lead to a situation where controlled shutdown can 
 never succeed and the only way to bounce the broker is to kill it.
 The root cause is that the broker uses a smart check to avoid fetching from a
 leader that is not alive according to the controller. This leads to the broker
 aborting a become-follower request. And in cases where the replication factor
 is 2, the leader can never be transferred to a follower, since it keeps
 rejecting the become-follower request and stays out of the ISR. This causes
 controlled shutdown to fail forever.
 One sequence of events that led to this bug is as follows -
 - Broker 2 is leader and controller
 - Broker 2 is bounced (uncontrolled shutdown)
 - Controller fails over
 - Controlled shutdown is invoked on broker 1
 - Controller starts leader election for partitions that broker 2 led
 - Controller sends become follower request with leader as broker 1 to broker 
 2. At the same time, it does not include broker 1 in alive broker list sent 
 as part of leader and isr request
 - Broker 2 rejects leaderAndIsr request since leader is not in the list of 
 alive brokers
 - Broker 1 fails to transfer leadership to broker 2 since broker 2 is not in 
 ISR
 - Controlled shutdown can never succeed on broker 1
 Since controlled shutdown is a config option, if there are bugs in controlled 
 shutdown, there is no option but to kill the broker

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (KAFKA-999) Controlled shutdown never succeeds until the broker is killed

2013-08-06 Thread Swapnil Ghike (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swapnil Ghike updated KAFKA-999:


Attachment: (was: LIKAFKA-269-v3.patch)

 Controlled shutdown never succeeds until the broker is killed
 -

 Key: KAFKA-999
 URL: https://issues.apache.org/jira/browse/KAFKA-999
 Project: Kafka
  Issue Type: Bug
  Components: controller
Affects Versions: 0.8
Reporter: Neha Narkhede
Assignee: Swapnil Ghike
Priority: Critical
 Attachments: kafka-999-v1.patch, kafka-999-v2.patch, 
 kafka-999-v3.patch


 A race condition in the way leader and isr request is handled by the broker 
 and controlled shutdown can lead to a situation where controlled shutdown can 
 never succeed and the only way to bounce the broker is to kill it.
 The root cause is that the broker uses a smart check to avoid fetching from a
 leader that is not alive according to the controller. This leads to the broker
 aborting a become-follower request. And in cases where the replication factor
 is 2, the leader can never be transferred to a follower, since it keeps
 rejecting the become-follower request and stays out of the ISR. This causes
 controlled shutdown to fail forever.
 One sequence of events that led to this bug is as follows -
 - Broker 2 is leader and controller
 - Broker 2 is bounced (uncontrolled shutdown)
 - Controller fails over
 - Controlled shutdown is invoked on broker 1
 - Controller starts leader election for partitions that broker 2 led
 - Controller sends become follower request with leader as broker 1 to broker 
 2. At the same time, it does not include broker 1 in alive broker list sent 
 as part of leader and isr request
 - Broker 2 rejects leaderAndIsr request since leader is not in the list of 
 alive brokers
 - Broker 1 fails to transfer leadership to broker 2 since broker 2 is not in 
 ISR
 - Controlled shutdown can never succeed on broker 1
 Since controlled shutdown is a config option, if there are bugs in controlled 
 shutdown, there is no option but to kill the broker

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (KAFKA-1003) ConsumerFetcherManager should pass clientId as metricsPrefix to AbstractFetcherManager

2013-08-06 Thread Swapnil Ghike (JIRA)
Swapnil Ghike created KAFKA-1003:


 Summary: ConsumerFetcherManager should pass clientId as 
metricsPrefix to AbstractFetcherManager
 Key: KAFKA-1003
 URL: https://issues.apache.org/jira/browse/KAFKA-1003
 Project: Kafka
  Issue Type: Bug
Reporter: Swapnil Ghike
Assignee: Swapnil Ghike


For consistency: we use clientId in the metric names elsewhere on the clients.
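
A hedged sketch of what the change looks like, assuming the 0.8 constructor
shape AbstractFetcherManager(name, metricPrefix, numFetchers):

class ConsumerFetcherManager(private val consumerIdString: String,
                             private val config: ConsumerConfig,
                             private val zkClient: ZkClient)
  extends AbstractFetcherManager("ConsumerFetcherManager-%d".format(SystemTime.milliseconds),
                                 config.clientId, // metricsPrefix: clientId instead of consumerIdString
                                 config.numConsumerFetchers)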

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

