[jira] [Updated] (KAFKA-5638) Inconsistency in consumer group related ACLs

2017-11-29 Thread Vahid Hashemian (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-5638:
---
Labels: kip  (was: needs-kip)

> Inconsistency in consumer group related ACLs
> --------------------------------------------
>
> Key: KAFKA-5638
> URL: https://issues.apache.org/jira/browse/KAFKA-5638
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.11.0.0
>Reporter: Vahid Hashemian
>Assignee: Vahid Hashemian
>Priority: Minor
>  Labels: kip
>
> Users can see all groups in the cluster (using consumer group’s {{--list}} 
> option) provided that they have {{Describe}} access to the cluster. It would 
> make more sense to modify that experience and limit what is listed in the 
> output to only those groups they have {{Describe}} access to. The reason is 
> that almost everything else is accessible by a user only if access is 
> specifically granted (through ACL {{--add}}), and this scenario should not be 
> an exception. The potential change would be updating the minimum required 
> permission of {{ListGroup}} from {{Describe (Cluster)}} to {{Describe 
> (Group)}}.
> We can also look at this issue from a different angle: A user with {{Read}} 
> access to a group can describe the group, but the same user would not see 
> anything when listing groups (assuming there is no {{Describe}} access to the 
> cluster). It makes more sense for this user to be able to list all groups 
> s/he can already describe.
> It would be great to know if any user is relying on the existing behavior 
> (listing all consumer groups using a {{Describe (Cluster)}} ACL).
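As a reference for the discussion, here is roughly what the proposed model would look like from the CLI (the principal and group names below are illustrative): a user is granted {{Describe}} on a specific group rather than on the cluster, and {{--list}} would then show only the groups that user can describe.
{noformat}
# Grant Describe on a single group to a single user (illustrative names)
bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 \
  --add --allow-principal User:alice --operation Describe --group my-group

# With the proposed change, this would list only the groups alice can describe
bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --list
{noformat}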



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (KAFKA-6275) Extend consumer offset reset tool to support deletion

2017-11-28 Thread Vahid Hashemian (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-6275:
---
Labels: kip  (was: needs-kip)

> Extend consumer offset reset tool to support deletion
> -
>
> Key: KAFKA-6275
> URL: https://issues.apache.org/jira/browse/KAFKA-6275
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jason Gustafson
>Assignee: Vahid Hashemian
>  Labels: kip
>
> It's useful to have a way to delete the offsets of a consumer group 
> explicitly. The reset tool already supports a number of different ways to 
> alter stored offsets, so perhaps we could add a {{--clear}} option. Note that 
> this would require a change to the OffsetCommit protocol which does not 
> currently support deletion. Perhaps if you commit an offset with a retention 
> time of 0, we can treat it as a deletion.
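For context, this is how the existing reset tool is invoked today, and where a deletion option would presumably slot in (group/topic names are illustrative, and the {{--delete-offsets}} flag shown is hypothetical pending the KIP):
{noformat}
# Existing: reset a group's offsets for a topic
bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
  --group my-group --reset-offsets --to-earliest --topic my-topic --execute

# Hypothetical: delete the stored offsets outright
bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
  --group my-group --delete-offsets --topic my-topic
{noformat}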



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (KAFKA-6118) Transient failure in kafka.api.SaslScramSslEndToEndAuthorizationTest.testTwoConsumersWithDifferentSaslCredentials

2017-11-28 Thread Vahid Hashemian (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16269578#comment-16269578
 ] 

Vahid Hashemian edited comment on KAFKA-6118 at 11/28/17 10:19 PM:
---

I also hit this today with one of my PRs (JDK 9 and Scala 2.12): 
[link|https://pastebin.com/yBiDVu9F]


was (Author: vahid):
I also hit this today with one of my PRs: [link|https://pastebin.com/yBiDVu9F]

> Transient failure in 
> kafka.api.SaslScramSslEndToEndAuthorizationTest.testTwoConsumersWithDifferentSaslCredentials
> -
>
> Key: KAFKA-6118
> URL: https://issues.apache.org/jira/browse/KAFKA-6118
> Project: Kafka
>  Issue Type: Sub-task
>  Components: core, unit tests
>Affects Versions: 1.0.0
>Reporter: Guozhang Wang
>
> Saw this failure on trunk jenkins job:
> https://builds.apache.org/job/kafka-pr-jdk9-scala2.12/2274/testReport/junit/kafka.api/SaslScramSslEndToEndAuthorizationTest/testTwoConsumersWithDifferentSaslCredentials/
> {code}
> Stacktrace
> org.apache.kafka.common.errors.GroupAuthorizationException: Not authorized to 
> access group: group
> Standard Output
> [2017-10-25 15:09:49,986] ERROR ZKShutdownHandler is not registered, so 
> ZooKeeper server won't take any action on ERROR or SHUTDOWN server state 
> changes (org.apache.zookeeper.server.ZooKeeperServer:472)
> Adding ACLs for resource `Cluster:kafka-cluster`: 
>   User:scram-admin has Allow permission for operations: ClusterAction 
> from hosts: * 
> Current ACLs for resource `Cluster:kafka-cluster`: 
>   User:scram-admin has Allow permission for operations: ClusterAction 
> from hosts: * 
> Completed Updating config for entity: user-principal 'scram-admin'.
> [2017-10-25 15:09:50,654] ERROR [ReplicaFetcher replicaId=0, leaderId=2, 
> fetcherId=0] Error for partition __consumer_offsets-0 from broker 2 
> (kafka.server.ReplicaFetcherThread:107)
> org.apache.kafka.common.errors.TopicAuthorizationException: Not authorized to 
> access topics: [Topic authorization failed.]
> [2017-10-25 15:09:50,654] ERROR [ReplicaFetcher replicaId=1, leaderId=2, 
> fetcherId=0] Error for partition __consumer_offsets-0 from broker 2 
> (kafka.server.ReplicaFetcherThread:107)
> org.apache.kafka.common.errors.TopicAuthorizationException: Not authorized to 
> access topics: [Topic authorization failed.]
> Adding ACLs for resource `Topic:*`: 
>   User:scram-admin has Allow permission for operations: Read from hosts: 
> * 
> Current ACLs for resource `Topic:*`: 
>   User:scram-admin has Allow permission for operations: Read from hosts: 
> * 
> Completed Updating config for entity: user-principal 'scram-user'.
> Completed Updating config for entity: user-principal 'scram-user2'.
> Adding ACLs for resource `Topic:e2etopic`: 
>   User:scram-user has Allow permission for operations: Write from hosts: *
>   User:scram-user has Allow permission for operations: Describe from 
> hosts: * 
> Adding ACLs for resource `Cluster:kafka-cluster`: 
>   User:scram-user has Allow permission for operations: Create from hosts: 
> * 
> Current ACLs for resource `Topic:e2etopic`: 
>   User:scram-user has Allow permission for operations: Write from hosts: *
>   User:scram-user has Allow permission for operations: Describe from 
> hosts: * 
> Adding ACLs for resource `Topic:e2etopic`: 
>   User:scram-user has Allow permission for operations: Read from hosts: *
>   User:scram-user has Allow permission for operations: Describe from 
> hosts: * 
> Adding ACLs for resource `Group:group`: 
>   User:scram-user has Allow permission for operations: Read from hosts: * 
> Current ACLs for resource `Topic:e2etopic`: 
>   User:scram-user has Allow permission for operations: Write from hosts: *
>   User:scram-user has Allow permission for operations: Describe from 
> hosts: *
>   User:scram-user has Allow permission for operations: Read from hosts: * 
> Current ACLs for resource `Group:group`: 
>   User:scram-user has Allow permission for operations: Read from hosts: * 
> [2017-10-25 15:09:52,788] ERROR Error while creating ephemeral at /controller 
> with return code: OK 
> (kafka.controller.KafkaControllerZkUtils$CheckedEphemeral:101)
> [2017-10-25 15:09:54,078] ERROR ZKShutdownHandler is not registered, so 
> ZooKeeper server won't take any action on ERROR or SHUTDOWN server state 
> changes (org.apache.zookeeper.server.ZooKeeperServer:472)
> [2017-10-25 15:09:54,112] ERROR ZKShutdownHandler is not registered, so 
> ZooKeeper server won't take any action on ERROR or SHUTDOWN server state 
> changes (org.apache.zookeeper.server.ZooKeeperServer:472)
> Adding ACLs for resource `Cluster:kafka-cluster`: 
>   User:scram-admin has Allow permission for 

[jira] [Commented] (KAFKA-6275) Extend consumer offset reset tool to support deletion

2017-11-27 Thread Vahid Hashemian (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16267305#comment-16267305
 ] 

Vahid Hashemian commented on KAFKA-6275:


[~hachikuji] Thanks for trying to push KIP-175 forward. I was also thinking 
about having a DeleteOffsets API as you mentioned. I'll start drafting a KIP so 
the discussion can continue there.

> Extend consumer offset reset tool to support deletion
> -
>
> Key: KAFKA-6275
> URL: https://issues.apache.org/jira/browse/KAFKA-6275
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jason Gustafson
>Assignee: Vahid Hashemian
>  Labels: needs-kip
>
> It's useful to have a way to delete the offsets of a consumer group 
> explicitly. The reset tool already supports a number of different ways to 
> alter stored offsets, so perhaps we could add a {{--clear}} option. Note that 
> this would require a change to the OffsetCommit protocol which does not 
> currently support deletion. Perhaps if you commit an offset with a retention 
> time of 0, we can treat it as a deletion.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (KAFKA-6275) Extend consumer offset reset tool to support deletion

2017-11-27 Thread Vahid Hashemian (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16267187#comment-16267187
 ] 

Vahid Hashemian edited comment on KAFKA-6275 at 11/27/17 6:58 PM:
--

The suggested solution seems to also conflict with 
[KIP-211|https://cwiki.apache.org/confluence/display/KAFKA/KIP-211%3A+Revise+Expiration+Semantics+of+Consumer+Group+Offsets]
 and the decision around whether the retention time field should stay or go. 
KIP-211 currently suggests removing that field.


was (Author: vahid):
This seems to also conflict with 
[KIP-211|https://cwiki.apache.org/confluence/display/KAFKA/KIP-211%3A+Revise+Expiration+Semantics+of+Consumer+Group+Offsets]
 and the decision around whether the retention time field should stay or go. 
KIP-211 currently suggests removing that field.

> Extend consumer offset reset tool to support deletion
> -
>
> Key: KAFKA-6275
> URL: https://issues.apache.org/jira/browse/KAFKA-6275
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jason Gustafson
>Assignee: Vahid Hashemian
>  Labels: needs-kip
>
> It's useful to have a way to delete the offsets of a consumer group 
> explicitly. The reset tool already supports a number of different ways to 
> alter stored offsets, so perhaps we could add a {{--clear}} option. Note that 
> this would require a change to the OffsetCommit protocol which does not 
> currently support deletion. Perhaps if you commit an offset with a retention 
> time of 0, we can treat it as a deletion.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-6275) Extend consumer offset reset tool to support deletion

2017-11-27 Thread Vahid Hashemian (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16267187#comment-16267187
 ] 

Vahid Hashemian commented on KAFKA-6275:


This seems to also conflict with 
[KIP-211|https://cwiki.apache.org/confluence/display/KAFKA/KIP-211%3A+Revise+Expiration+Semantics+of+Consumer+Group+Offsets]
 and the decision around whether the retention time field should stay or go. 
KIP-211 currently suggests removing that field.

> Extend consumer offset reset tool to support deletion
> -
>
> Key: KAFKA-6275
> URL: https://issues.apache.org/jira/browse/KAFKA-6275
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jason Gustafson
>Assignee: Vahid Hashemian
>  Labels: needs-kip
>
> It's useful to have a way to delete the offsets of a consumer group 
> explicitly. The reset tool already supports a number of different ways to 
> alter stored offsets, so perhaps we could add a {{--clear}} option. Note that 
> this would require a change to the OffsetCommit protocol which does not 
> currently support deletion. Perhaps if you commit an offset with a retention 
> time of 0, we can treat it as a deletion.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (KAFKA-6275) Extend consumer offset reset tool to support deletion

2017-11-27 Thread Vahid Hashemian (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16267140#comment-16267140
 ] 

Vahid Hashemian edited comment on KAFKA-6275 at 11/27/17 5:59 PM:
--

[~hachikuji] I'd like to work on this. Is it possible to push 
[KIP-175|https://cwiki.apache.org/confluence/display/KAFKA/KIP-175%3A+Additional+%27--describe%27+views+for+ConsumerGroupCommand]
 forward to avoid conflict between that and the changes (and the KIP) required 
for this JIRA? Thanks.


was (Author: vahid):
[~hachikuji] I'd like to work on this. Is it possible to push 
[KIP-175|https://cwiki.apache.org/confluence/display/KAFKA/KIP-175%3A+Additional+%27--describe%27+views+for+ConsumerGroupCommand]
 forward to avoid conflict between that and the KIP for this JIRA? Thanks.

> Extend consumer offset reset tool to support deletion
> -
>
> Key: KAFKA-6275
> URL: https://issues.apache.org/jira/browse/KAFKA-6275
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jason Gustafson
>Assignee: Vahid Hashemian
>  Labels: needs-kip
>
> It's useful to have a way to delete the offsets of a consumer group 
> explicitly. The reset tool already supports a number of different ways to 
> alter stored offsets, so perhaps we could add a {{--clear}} option. Note that 
> this would require a change to the OffsetCommit protocol which does not 
> currently support deletion. Perhaps if you commit an offset with a retention 
> time of 0, we can treat it as a deletion.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-6275) Extend consumer offset reset tool to support deletion

2017-11-27 Thread Vahid Hashemian (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16267140#comment-16267140
 ] 

Vahid Hashemian commented on KAFKA-6275:


[~hachikuji] I'd like to work on this. Is it possible to push 
[KIP-175|https://cwiki.apache.org/confluence/display/KAFKA/KIP-175%3A+Additional+%27--describe%27+views+for+ConsumerGroupCommand]
 forward to avoid conflict between that and the KIP for this JIRA? Thanks.

> Extend consumer offset reset tool to support deletion
> -
>
> Key: KAFKA-6275
> URL: https://issues.apache.org/jira/browse/KAFKA-6275
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jason Gustafson
>Assignee: Vahid Hashemian
>  Labels: needs-kip
>
> It's useful to have a way to delete the offsets of a consumer group 
> explicitly. The reset tool already supports a number of different ways to 
> alter stored offsets, so perhaps we could add a {{--clear}} option. Note that 
> this would require a change to the OffsetCommit protocol which does not 
> currently support deletion. Perhaps if you commit an offset with a retention 
> time of 0, we can treat it as a deletion.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (KAFKA-6275) Extend consumer offset reset tool to support deletion

2017-11-27 Thread Vahid Hashemian (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian reassigned KAFKA-6275:
--

Assignee: Vahid Hashemian

> Extend consumer offset reset tool to support deletion
> -
>
> Key: KAFKA-6275
> URL: https://issues.apache.org/jira/browse/KAFKA-6275
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jason Gustafson
>Assignee: Vahid Hashemian
>  Labels: needs-kip
>
> It's useful to have a way to delete the offsets of a consumer group 
> explicitly. The reset tool already supports a number of different ways to 
> alter stored offsets, so perhaps we could add a {{--clear}} option. Note that 
> this would require a change to the OffsetCommit protocol which does not 
> currently support deletion. Perhaps if you commit an offset with a retention 
> time of 0, we can treat it as a deletion.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-6193) ReassignPartitionsClusterTest.shouldPerformMultipleReassignmentOperationsOverVariousTopics fails sometimes

2017-11-21 Thread Vahid Hashemian (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16261232#comment-16261232
 ] 

Vahid Hashemian commented on KAFKA-6193:


Another instance of this failure that happened yesterday is 
[here|https://builds.apache.org/job/kafka-pr-jdk8-scala2.12/9556/testReport/kafka.admin/ReassignPartitionsClusterTest/shouldPerformMultipleReassignmentOperationsOverVariousTopics/].

> ReassignPartitionsClusterTest.shouldPerformMultipleReassignmentOperationsOverVariousTopics
>  fails sometimes
> --
>
> Key: KAFKA-6193
> URL: https://issues.apache.org/jira/browse/KAFKA-6193
> Project: Kafka
>  Issue Type: Test
>Reporter: Ted Yu
> Fix For: 1.1.0, 1.0.1
>
> Attachments: 6193.out
>
>
> From 
> https://builds.apache.org/job/kafka-trunk-jdk8/2198/testReport/junit/kafka.admin/ReassignPartitionsClusterTest/shouldPerformMultipleReassignmentOperationsOverVariousTopics/
>  :
> {code}
> java.lang.AssertionError: expected: but was:
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:834)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:144)
>   at 
> kafka.admin.ReassignPartitionsClusterTest.shouldPerformMultipleReassignmentOperationsOverVariousTopics(ReassignPartitionsClusterTest.scala:524)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (KAFKA-4893) async topic deletion conflicts with max topic length

2017-11-21 Thread Vahid Hashemian (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-4893:
---
Fix Version/s: 1.1.0

> async topic deletion conflicts with max topic length
> 
>
> Key: KAFKA-4893
> URL: https://issues.apache.org/jira/browse/KAFKA-4893
> Project: Kafka
>  Issue Type: Bug
>Reporter: Onur Karaman
>Assignee: Vahid Hashemian
>Priority: Minor
> Fix For: 1.1.0
>
>
> As per the 
> [documentation|http://kafka.apache.org/documentation/#basic_ops_add_topic], 
> topics can be only 249 characters long to line up with typical filesystem 
> limitations:
> {quote}
> Each sharded partition log is placed into its own folder under the Kafka log 
> directory. The name of such folders consists of the topic name, appended by a 
> dash (\-) and the partition id. Since a typical folder name cannot be over 
> 255 characters long, there will be a limitation on the length of topic names. 
> We assume the number of partitions will not ever be above 100,000. Therefore, 
> topic names cannot be longer than 249 characters. This leaves just enough 
> room in the folder name for a dash and a potentially 5 digit long partition 
> id.
> {quote}
> {{kafka.common.Topic.maxNameLength}} is set to 249 and is used during 
> validation.
> This limit ends up not being quite right since topic deletion ends up 
> renaming the directory to the form {{topic-partition.uniqueId-delete}} as can 
> be seen in {{LogManager.asyncDelete}}:
> {code}
> val dirName = new StringBuilder(removedLog.name)
>   .append(".")
>   
> .append(java.util.UUID.randomUUID.toString.replaceAll("-",""))
>   .append(Log.DeleteDirSuffix)
>   .toString()
> {code}
> So the unique id and "-delete" suffix end up hogging some of the characters. 
> Deleting a long-named topic results in a log message such as the following:
> {code}
> kafka.common.KafkaStorageException: Failed to rename log directory from 
> /tmp/kafka-logs0/0-0
>  to 
> /tmp/kafka-logs0/0-0.797bba3fb2464729840f87769243edbb-delete
>   at kafka.log.LogManager.asyncDelete(LogManager.scala:439)
>   at 
> kafka.cluster.Partition$$anonfun$delete$1.apply$mcV$sp(Partition.scala:142)
>   at kafka.cluster.Partition$$anonfun$delete$1.apply(Partition.scala:137)
>   at kafka.cluster.Partition$$anonfun$delete$1.apply(Partition.scala:137)
>   at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:213)
>   at kafka.utils.CoreUtils$.inWriteLock(CoreUtils.scala:221)
>   at kafka.cluster.Partition.delete(Partition.scala:137)
>   at kafka.server.ReplicaManager.stopReplica(ReplicaManager.scala:230)
>   at 
> kafka.server.ReplicaManager$$anonfun$stopReplicas$2.apply(ReplicaManager.scala:260)
>   at 
> kafka.server.ReplicaManager$$anonfun$stopReplicas$2.apply(ReplicaManager.scala:259)
>   at scala.collection.Iterator$class.foreach(Iterator.scala:727)
>   at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
>   at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
>   at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>   at kafka.server.ReplicaManager.stopReplicas(ReplicaManager.scala:259)
>   at kafka.server.KafkaApis.handleStopReplicaRequest(KafkaApis.scala:174)
>   at kafka.server.KafkaApis.handle(KafkaApis.scala:86)
>   at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:64)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> The topic after this point still exists but has Leader set to -1 and the 
> controller recognizes the topic completion as incomplete (the topic znode is 
> still in /admin/delete_topics).
> I don't believe linkedin has any topic name this long but I'm making the 
> ticket in case anyone runs into this problem.
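To make the arithmetic concrete, here is a quick sketch (plain Java, not Kafka source) of the name-length math; the 32-character UUID and {{-delete}} suffix come from the {{asyncDelete}} snippet above, and the 5-digit partition id follows the documentation quoted earlier:
{code}
public class TopicNameLengthMath {
    public static void main(String[] args) {
        int fsLimit = 255;                 // typical filesystem name limit
        int liveOverhead = 1 + 5;          // "-" + up to 5-digit partition id
        int deleteOverhead = liveOverhead + 1 + 32 + "-delete".length(); // "." + uuid + suffix

        System.out.println("max topic length (live dir):   " + (fsLimit - liveOverhead));   // 249
        System.out.println("max topic length (delete dir): " + (fsLimit - deleteOverhead)); // 209
    }
}
{code}
So any topic name longer than roughly 209 characters passes validation but can fail the rename at deletion time.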



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-6219) Inconsistent behavior for kafka-consumer-groups

2017-11-16 Thread Vahid Hashemian (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16255838#comment-16255838
 ] 

Vahid Hashemian commented on KAFKA-6219:


[~huxi_2b] Yes, it makes sense. This JIRA aims at improving error handling if I 
understood correctly. Thanks for highlighting the distinction.

> Inconsistent behavior for kafka-consumer-groups
> ---
>
> Key: KAFKA-6219
> URL: https://issues.apache.org/jira/browse/KAFKA-6219
> Project: Kafka
>  Issue Type: Bug
>  Components: admin
>Affects Versions: 1.0.0
>Reporter: huxihx
>Assignee: huxihx
>
> For example, when ACL is enabled, running kafka-consumer-groups.sh --describe 
> to describe a group complains:
> `Error: Executing consumer group command failed due to Not authorized to 
> access group: Group authorization failed.`
> However, running kafka-consumer-groups.sh --list returns nothing, leaving the 
> user unsure whether there are no groups at all or something went wrong.
> In `AdminClient.listAllGroups`, it captures all the possible exceptions and 
> returns an empty List.
> It's better to keep those two methods consistent. Does that make sense?
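Schematically, the inconsistency is about where the error surfaces; a minimal illustration (not the actual {{AdminClient}} code, and {{fetchGroupsFromBrokers}} is a made-up placeholder for the broker round-trip):
{code}
import java.util.Collections;
import java.util.List;

class ListVsDescribe {
    // --list today: errors are swallowed, so "not authorized" is
    // indistinguishable from "no groups exist"
    List<String> listAllGroups() {
        try {
            return fetchGroupsFromBrokers();
        } catch (Exception e) {
            return Collections.emptyList(); // the exception is silently dropped
        }
    }

    private List<String> fetchGroupsFromBrokers() {
        throw new RuntimeException("Group authorization failed."); // simulated ACL failure
    }
}
{code}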



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-6219) Inconsistent behavior for kafka-consumer-groups

2017-11-15 Thread Vahid Hashemian (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16254849#comment-16254849
 ] 

Vahid Hashemian commented on KAFKA-6219:


[~huxi_2b] Is this possibly a duplicate of 
[KAFKA-5638|https://issues.apache.org/jira/browse/KAFKA-5638]?

> Inconsistent behavior for kafka-consumer-groups
> ---
>
> Key: KAFKA-6219
> URL: https://issues.apache.org/jira/browse/KAFKA-6219
> Project: Kafka
>  Issue Type: Bug
>  Components: admin
>Affects Versions: 1.0.0
>Reporter: huxihx
>Assignee: huxihx
>
> For example, when ACL is enabled, running kafka-consumer-groups.sh --describe 
> to describe a group complains:
> `Error: Executing consumer group command failed due to Not authorized to 
> access group: Group authorization failed.`
> However, running kafka-consumer-groups.sh --list returns nothing, leaving the 
> user unsure whether there are no groups at all or something went wrong.
> In `AdminClient.listAllGroups`, it captures all the possible exceptions and 
> returns an empty List.
> It's better to keep those two methods consistent. Does that make sense?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (KAFKA-6184) report a metric of the lag between the consumer offset and the start offset of the log

2017-11-11 Thread Vahid Hashemian (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian reassigned KAFKA-6184:
--

Assignee: huxihx  (was: Vahid Hashemian)

> report a metric of the lag between the consumer offset and the start offset 
> of the log
> --
>
> Key: KAFKA-6184
> URL: https://issues.apache.org/jira/browse/KAFKA-6184
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 1.0.0
>Reporter: Jun Rao
>Assignee: huxihx
>  Labels: needs-kip
>
> Currently, the consumer reports a metric of the lag between the high 
> watermark of a log and the consumer offset. It will be useful to report a 
> similar lag metric between the consumer offset and the start offset of the 
> log. If this latter lag gets close to 0, it's an indication that the consumer 
> may lose data soon.
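For reference, the proposed metric can already be computed by hand with public consumer APIs, which also illustrates the intended semantics (a sketch, not the eventual implementation):
{code}
import java.util.Collection;
import java.util.Map;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

class StartOffsetLag {
    static void report(KafkaConsumer<?, ?> consumer, Collection<TopicPartition> partitions) {
        Map<TopicPartition, Long> start = consumer.beginningOffsets(partitions); // log start offsets
        Map<TopicPartition, Long> end = consumer.endOffsets(partitions);         // high watermarks
        for (TopicPartition tp : partitions) {
            OffsetAndMetadata om = consumer.committed(tp);
            if (om == null) continue;                    // nothing committed yet
            long hwLag = end.get(tp) - om.offset();      // the existing lag metric
            long startLag = om.offset() - start.get(tp); // the proposed metric; near 0 => data at risk
            System.out.printf("%s hwLag=%d startLag=%d%n", tp, hwLag, startLag);
        }
    }
}
{code}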



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-6184) report a metric of the lag between the consumer offset and the start offset of the log

2017-11-09 Thread Vahid Hashemian (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16245342#comment-16245342
 ] 

Vahid Hashemian commented on KAFKA-6184:


[~huxi_2b], no worries. And thanks for offering it back, but you've already 
spent more time on it than I did. Please carry on with it. Thanks :)

> report a metric of the lag between the consumer offset and the start offset 
> of the log
> --
>
> Key: KAFKA-6184
> URL: https://issues.apache.org/jira/browse/KAFKA-6184
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 1.0.0
>Reporter: Jun Rao
>Assignee: Vahid Hashemian
>
> Currently, the consumer reports a metric of the lag between the high 
> watermark of a log and the consumer offset. It will be useful to report a 
> similar lag metric between the consumer offset and the start offset of the 
> log. If this latter lag gets close to 0, it's an indication that the consumer 
> may lose data soon.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-6184) report a metric of the lag between the consumer offset and the start offset of the log

2017-11-08 Thread Vahid Hashemian (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16244198#comment-16244198
 ] 

Vahid Hashemian commented on KAFKA-6184:


[~huxi_2b] For next time, could you please ask the current assignee of the JIRA 
(and wait a few days in case there is no response) before reassigning to 
yourself? I believe that's the unofficial rule the Kafka community goes by. 
Thanks.

> report a metric of the lag between the consumer offset and the start offset 
> of the log
> --
>
> Key: KAFKA-6184
> URL: https://issues.apache.org/jira/browse/KAFKA-6184
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 1.0.0
>Reporter: Jun Rao
>Assignee: huxihx
>
> Currently, the consumer reports a metric of the lag between the high 
> watermark of a log and the consumer offset. It will be useful to report a 
> similar lag metric between the consumer offset and the start offset of the 
> log. If this latter lag gets close to 0, it's an indication that the consumer 
> may lose data soon.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (KAFKA-6184) report a metric of the lag between the consumer offset and the start offset of the log

2017-11-07 Thread Vahid Hashemian (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian reassigned KAFKA-6184:
--

Assignee: Vahid Hashemian

> report a metric of the lag between the consumer offset and the start offset 
> of the log
> --
>
> Key: KAFKA-6184
> URL: https://issues.apache.org/jira/browse/KAFKA-6184
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 1.0.0
>Reporter: Jun Rao
>Assignee: Vahid Hashemian
>
> Currently, the consumer reports a metric of the lag between the high 
> watermark of a log and the consumer offset. It will be useful to report a 
> similar lag metric between the consumer offset and the start offset of the 
> log. If this latter lag gets close to 0, it's an indication that the consumer 
> may lose data soon.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (KAFKA-6158) CONSUMER-ID and HOST values are concatenated if the CONSUMER-ID is > 50 chars

2017-11-07 Thread Vahid Hashemian (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16235878#comment-16235878
 ] 

Vahid Hashemian edited comment on KAFKA-6158 at 11/7/17 11:03 PM:
--

Those columns are fixed length. So if the value length is at or above the 
column length this would happen. I'll try to make it look better as part of the 
work I'm doing for 
[KAFKA-5526|https://issues.apache.org/jira/browse/KAFKA-5526].


was (Author: vahid):
Those columns are fixed length. So if the value length is at or above the 
column length this would happen. I'll try to make it look better as part of the 
work I'm doing for 
[KAFKA-4682|https://issues.apache.org/jira/browse/KAFKA-5526].

> CONSUMER-ID and HOST values are concatenated if the CONSUMER-ID is > 50 chars
> -
>
> Key: KAFKA-6158
> URL: https://issues.apache.org/jira/browse/KAFKA-6158
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.11.0.0
>Reporter: Gustav Westling
>Assignee: Vahid Hashemian
>Priority: Trivial
>
> Using the command:
> {noformat}
> ./kafka-consumer-groups.sh --bootstrap-server=localhost:9092 --describe 
> --group foo-group
> {noformat}
> If the CONSUMER-ID is too long the delimiter between CONSUMER-ID and HOST 
> disappears.
> Output:
> {noformat}
> TOPIC  PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG 
>CONSUMER-ID   HOST 
>   CLIENT-ID
> foobar-114 8948049 8948663 614
> default-6697bb36-bf03-46e4-8f3e-4ef987177834-StreamThread-1-consumer-7c0345f5-4806-4957-be26-eb4b3bd6a9dc/10.2.3.40
>  
> default-6697bb36-bf03-46e4-8f3e-4ef987177834-StreamThread-1-consumer
> {noformat}
> Expected output:
> {noformat}
> TOPIC  PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG 
>CONSUMER-ID   HOST 
>   CLIENT-ID
> foobar-114 8948049 8948663 614
> default-6697bb36-bf03-46e4-8f3e-4ef987177834-StreamThread-1-consumer-7c0345f5-4806-4957-be26-eb4b3bd6a9dc
>  /10.2.3.40 
> default-6697bb36-bf03-46e4-8f3e-4ef987177834-StreamThread-1-consumer
> {noformat}
> I suspect that the formatting rules are incorrect 
> https://github.com/apache/kafka/blob/0.11.0/core/src/main/scala/kafka/admin/ConsumerGroupCommand.scala#L137.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (KAFKA-6110) Warning when running the broker on Windows

2017-11-07 Thread Vahid Hashemian (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-6110:
---
Fix Version/s: (was: 1.1.0)

> Warning when running the broker on Windows
> --
>
> Key: KAFKA-6110
> URL: https://issues.apache.org/jira/browse/KAFKA-6110
> Project: Kafka
>  Issue Type: Bug
> Environment: Windows 10 VM
>Reporter: Vahid Hashemian
>Priority: Minor
>
> *This issue exists in 1.0.0-RC2.*
> The following warning appears in the broker log at startup:
> {code}
> [2017-10-23 15:29:49,370] WARN Error processing 
> kafka.log:type=LogManager,name=LogDirectoryOffline,logDirectory=C:\tmp\kafka-logs
>  (com.yammer.metrics.reporting.JmxReporter)
> javax.management.MalformedObjectNameException: Invalid character ':' in value 
> part of property
> at javax.management.ObjectName.construct(ObjectName.java:618)
> at javax.management.ObjectName.<init>(ObjectName.java:1382)
> at 
> com.yammer.metrics.reporting.JmxReporter.onMetricAdded(JmxReporter.java:395)
> at 
> com.yammer.metrics.core.MetricsRegistry.notifyMetricAdded(MetricsRegistry.java:516)
> at 
> com.yammer.metrics.core.MetricsRegistry.getOrAdd(MetricsRegistry.java:491)
> at 
> com.yammer.metrics.core.MetricsRegistry.newGauge(MetricsRegistry.java:79)
> at 
> kafka.metrics.KafkaMetricsGroup$class.newGauge(KafkaMetricsGroup.scala:80)
> at kafka.log.LogManager.newGauge(LogManager.scala:50)
> at kafka.log.LogManager$$anonfun$6.apply(LogManager.scala:117)
> at kafka.log.LogManager$$anonfun$6.apply(LogManager.scala:116)
> at 
> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
> at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
> at kafka.log.LogManager.<init>(LogManager.scala:116)
> at kafka.log.LogManager$.apply(LogManager.scala:799)
> at kafka.server.KafkaServer.startup(KafkaServer.scala:222)
> at 
> kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:38)
> at kafka.Kafka$.main(Kafka.scala:92)
> at kafka.Kafka.main(Kafka.scala)
> {code}
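The root cause is the drive-letter colon in the {{logDirectory}} property value, since JMX treats ':' as the domain separator in an object name. A small sketch of the standard remedy, quoting the value with {{ObjectName.quote}} (the path is illustrative):
{code}
import javax.management.ObjectName;

public class QuoteDemo {
    public static void main(String[] args) throws Exception {
        String dir = "C:\\tmp\\kafka-logs";
        // Unquoted, the ':' in "C:" makes the name malformed:
        //   new ObjectName("kafka.log:type=LogManager,logDirectory=" + dir)
        ObjectName ok = new ObjectName(
                "kafka.log:type=LogManager,name=LogDirectoryOffline,logDirectory="
                        + ObjectName.quote(dir));
        System.out.println(ok); // the value comes out quoted and valid
    }
}
{code}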



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-6110) Warning when running the broker on Windows

2017-11-07 Thread Vahid Hashemian (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian resolved KAFKA-6110.

   Resolution: Duplicate
Fix Version/s: 1.1.0

> Warning when running the broker on Windows
> --
>
> Key: KAFKA-6110
> URL: https://issues.apache.org/jira/browse/KAFKA-6110
> Project: Kafka
>  Issue Type: Bug
> Environment: Windows 10 VM
>Reporter: Vahid Hashemian
>Priority: Minor
> Fix For: 1.1.0
>
>
> *This issue exists in 1.0.0-RC2.*
> The following warning appears in the broker log at startup:
> {code}
> [2017-10-23 15:29:49,370] WARN Error processing 
> kafka.log:type=LogManager,name=LogDirectoryOffline,logDirectory=C:\tmp\kafka-logs
>  (com.yammer.metrics.reporting.JmxReporter)
> javax.management.MalformedObjectNameException: Invalid character ':' in value 
> part of property
> at javax.management.ObjectName.construct(ObjectName.java:618)
> at javax.management.ObjectName.<init>(ObjectName.java:1382)
> at 
> com.yammer.metrics.reporting.JmxReporter.onMetricAdded(JmxReporter.java:395)
> at 
> com.yammer.metrics.core.MetricsRegistry.notifyMetricAdded(MetricsRegistry.java:516)
> at 
> com.yammer.metrics.core.MetricsRegistry.getOrAdd(MetricsRegistry.java:491)
> at 
> com.yammer.metrics.core.MetricsRegistry.newGauge(MetricsRegistry.java:79)
> at 
> kafka.metrics.KafkaMetricsGroup$class.newGauge(KafkaMetricsGroup.scala:80)
> at kafka.log.LogManager.newGauge(LogManager.scala:50)
> at kafka.log.LogManager$$anonfun$6.apply(LogManager.scala:117)
> at kafka.log.LogManager$$anonfun$6.apply(LogManager.scala:116)
> at 
> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
> at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
> at kafka.log.LogManager.<init>(LogManager.scala:116)
> at kafka.log.LogManager$.apply(LogManager.scala:799)
> at kafka.server.KafkaServer.startup(KafkaServer.scala:222)
> at 
> kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:38)
> at kafka.Kafka$.main(Kafka.scala:92)
> at kafka.Kafka.main(Kafka.scala)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-6110) Warning when running the broker on Windows

2017-11-07 Thread Vahid Hashemian (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16242814#comment-16242814
 ] 

Vahid Hashemian commented on KAFKA-6110:


That's right. Thanks. I'll mark this as duplicate.

> Warning when running the broker on Windows
> --
>
> Key: KAFKA-6110
> URL: https://issues.apache.org/jira/browse/KAFKA-6110
> Project: Kafka
>  Issue Type: Bug
> Environment: Windows 10 VM
>Reporter: Vahid Hashemian
>Priority: Minor
>
> *This issue exists in 1.0.0-RC2.*
> The following warning appears in the broker log at startup:
> {code}
> [2017-10-23 15:29:49,370] WARN Error processing 
> kafka.log:type=LogManager,name=LogDirectoryOffline,logDirectory=C:\tmp\kafka-logs
>  (com.yammer.metrics.reporting.JmxReporter)
> javax.management.MalformedObjectNameException: Invalid character ':' in value 
> part of property
> at javax.management.ObjectName.construct(ObjectName.java:618)
> at javax.management.ObjectName.<init>(ObjectName.java:1382)
> at 
> com.yammer.metrics.reporting.JmxReporter.onMetricAdded(JmxReporter.java:395)
> at 
> com.yammer.metrics.core.MetricsRegistry.notifyMetricAdded(MetricsRegistry.java:516)
> at 
> com.yammer.metrics.core.MetricsRegistry.getOrAdd(MetricsRegistry.java:491)
> at 
> com.yammer.metrics.core.MetricsRegistry.newGauge(MetricsRegistry.java:79)
> at 
> kafka.metrics.KafkaMetricsGroup$class.newGauge(KafkaMetricsGroup.scala:80)
> at kafka.log.LogManager.newGauge(LogManager.scala:50)
> at kafka.log.LogManager$$anonfun$6.apply(LogManager.scala:117)
> at kafka.log.LogManager$$anonfun$6.apply(LogManager.scala:116)
> at 
> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
> at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
> at kafka.log.LogManager.<init>(LogManager.scala:116)
> at kafka.log.LogManager$.apply(LogManager.scala:799)
> at kafka.server.KafkaServer.startup(KafkaServer.scala:222)
> at 
> kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:38)
> at kafka.Kafka$.main(Kafka.scala:92)
> at kafka.Kafka.main(Kafka.scala)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (KAFKA-4950) ConcurrentModificationException when iterating over Kafka Metrics

2017-11-07 Thread Vahid Hashemian (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian reassigned KAFKA-4950:
--

Assignee: Sébastien Launay  (was: Vahid Hashemian)

> ConcurrentModificationException when iterating over Kafka Metrics
> -
>
> Key: KAFKA-4950
> URL: https://issues.apache.org/jira/browse/KAFKA-4950
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.1.1
>Reporter: Dumitru Postoronca
>Assignee: Sébastien Launay
>Priority: Minor
> Fix For: 0.11.0.2
>
>
> It looks like the when calling {{PartitionStates.partitionSet()}}, while the 
> resulting Hashmap is being built, the internal state of the allocations can 
> change, which leads to ConcurrentModificationException during the copy 
> operation.
> {code}
> java.util.ConcurrentModificationException
> at 
> java.util.LinkedHashMap$LinkedHashIterator.nextNode(LinkedHashMap.java:719)
> at 
> java.util.LinkedHashMap$LinkedKeyIterator.next(LinkedHashMap.java:742)
> at java.util.AbstractCollection.addAll(AbstractCollection.java:343)
> at java.util.HashSet.<init>(HashSet.java:119)
> at 
> org.apache.kafka.common.internals.PartitionStates.partitionSet(PartitionStates.java:66)
> at 
> org.apache.kafka.clients.consumer.internals.SubscriptionState.assignedPartitions(SubscriptionState.java:291)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator$ConsumerCoordinatorMetrics$1.measure(ConsumerCoordinator.java:783)
> at 
> org.apache.kafka.common.metrics.KafkaMetric.value(KafkaMetric.java:61)
> at 
> org.apache.kafka.common.metrics.KafkaMetric.value(KafkaMetric.java:52)
> {code}
> {code}
> // client code:
> import java.util.Collections;
> import java.util.HashMap;
> import java.util.Map;
> import com.codahale.metrics.Gauge;
> import com.codahale.metrics.Metric;
> import com.codahale.metrics.MetricSet;
> import org.apache.kafka.clients.consumer.KafkaConsumer;
> import org.apache.kafka.common.MetricName;
> import static com.codahale.metrics.MetricRegistry.name;
> public class KafkaMetricSet implements MetricSet {
> private final KafkaConsumer<?, ?> client;
> public KafkaMetricSet(KafkaConsumer<?, ?> client) {
> this.client = client;
> }
> @Override
> public Map<String, Metric> getMetrics() {
> final Map<String, Metric> gauges = new HashMap<String, Metric>();
> Map<MetricName, ? extends org.apache.kafka.common.Metric> m = client.metrics();
> for (final Map.Entry<MetricName, ? extends org.apache.kafka.common.Metric> e : 
> m.entrySet()) {
> gauges.put(name(e.getKey().group(), e.getKey().name(), "count"), 
> new Gauge<Double>() {
> @Override
> public Double getValue() {
> return e.getValue().value(); // exception thrown here 
> }
> });
> }
> return Collections.unmodifiableMap(gauges);
> }
> }
> {code}
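Since the race happens inside the metric's own measurement (the consumer thread mutates its assignment while the gauge reads it), one client-side mitigation is to guard the read and keep the last good value; a sketch of that idea as a drop-in replacement for the {{getValue()}} above (the cached field is illustrative, not the fix that went into Kafka):
{code}
new Gauge<Double>() {
    private volatile Double lastValue = Double.NaN;

    @Override
    public Double getValue() {
        try {
            lastValue = e.getValue().value(); // may race with the consumer thread
        } catch (java.util.ConcurrentModificationException ignored) {
            // keep the previous reading instead of propagating the CME
        }
        return lastValue;
    }
};
{code}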



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-6158) CONSUMER-ID and HOST values are concatenated if the CONSUMER-ID is > 50 chars

2017-11-02 Thread Vahid Hashemian (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16235992#comment-16235992
 ] 

Vahid Hashemian commented on KAFKA-6158:


Well, the same issue could occur for the {{TOPIC}}, {{HOST}}, or {{CLIENT-ID}} 
columns. My thinking is to dynamically expand a column if any entry in it 
exceeds the fixed width (a sketch of the idea follows).
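A sketch of that dynamic-width idea in plain Java (not the actual {{ConsumerGroupCommand}} code): size each column to its widest cell instead of a fixed 50 characters.
{code}
import java.util.List;

class ColumnPrinter {
    static void printTable(String[] header, List<String[]> rows) {
        int[] widths = new int[header.length];
        for (int c = 0; c < header.length; c++) {
            widths[c] = header[c].length();
            for (String[] row : rows) widths[c] = Math.max(widths[c], row[c].length());
        }
        StringBuilder fmt = new StringBuilder();
        for (int w : widths) fmt.append("%-").append(w + 2).append("s"); // 2-space gap
        fmt.append("%n");
        System.out.printf(fmt.toString(), (Object[]) header);
        for (String[] row : rows) System.out.printf(fmt.toString(), (Object[]) row);
    }
}
{code}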

> CONSUMER-ID and HOST values are concatenated if the CONSUMER-ID is > 50 chars
> -
>
> Key: KAFKA-6158
> URL: https://issues.apache.org/jira/browse/KAFKA-6158
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.11.0.0
>Reporter: Gustav Westling
>Assignee: Vahid Hashemian
>
> Using the command:
> {noformat}
> ./kafka-consumer-groups.sh --bootstrap-server=localhost:9092 --describe 
> --group foo-group
> {noformat}
> If the CONSUMER-ID is too long the delimiter between CONSUMER-ID and HOST 
> disappears.
> Output:
> {noformat}
> TOPIC  PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG 
>CONSUMER-ID   HOST 
>   CLIENT-ID
> foobar-114 8948049 8948663 614
> default-6697bb36-bf03-46e4-8f3e-4ef987177834-StreamThread-1-consumer-7c0345f5-4806-4957-be26-eb4b3bd6a9dc/10.2.3.40
>  
> default-6697bb36-bf03-46e4-8f3e-4ef987177834-StreamThread-1-consumer
> {noformat}
> Expected output:
> {noformat}
> TOPIC  PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG 
>CONSUMER-ID   HOST 
>   CLIENT-ID
> foobar-114 8948049 8948663 614
> default-6697bb36-bf03-46e4-8f3e-4ef987177834-StreamThread-1-consumer-7c0345f5-4806-4957-be26-eb4b3bd6a9dc
>  /10.2.3.40 
> default-6697bb36-bf03-46e4-8f3e-4ef987177834-StreamThread-1-consumer
> {noformat}
> I suspect that the formatting rules are incorrect 
> https://github.com/apache/kafka/blob/0.11.0/core/src/main/scala/kafka/admin/ConsumerGroupCommand.scala#L137.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-6158) CONSUMER-ID and HOST values are concatenated if the CONSUMER-ID is > 50 chars

2017-11-02 Thread Vahid Hashemian (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16235878#comment-16235878
 ] 

Vahid Hashemian commented on KAFKA-6158:


Those columns are fixed length. So if the value length is at or above the 
column length this would happen. I'll try to make it look better as part of the 
work I'm doing for 
[KAFKA-4682|https://issues.apache.org/jira/browse/KAFKA-5526].

> CONSUMER-ID and HOST values are concatenated if the CONSUMER-ID is > 50 chars
> -
>
> Key: KAFKA-6158
> URL: https://issues.apache.org/jira/browse/KAFKA-6158
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.11.0.0
>Reporter: Gustav Westling
>
> Using the command:
> {noformat}
> ./kafka-consumer-groups.sh --bootstrap-server=localhost:9092 --describe 
> --group foo-group
> {noformat}
> If the CONSUMER-ID is too long the delimiter between CONSUMER-ID and HOST 
> disappears.
> Output:
> {noformat}
> TOPIC  PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG 
>CONSUMER-ID   HOST 
>   CLIENT-ID
> foobar-114 8948049 8948663 614
> default-6697bb36-bf03-46e4-8f3e-4ef987177834-StreamThread-1-consumer-7c0345f5-4806-4957-be26-eb4b3bd6a9dc/10.2.3.40
>  
> default-6697bb36-bf03-46e4-8f3e-4ef987177834-StreamThread-1-consumer
> {noformat}
> Expected output:
> {noformat}
> TOPIC  PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG 
>CONSUMER-ID   HOST 
>   CLIENT-ID
> foobar-114 8948049 8948663 614
> default-6697bb36-bf03-46e4-8f3e-4ef987177834-StreamThread-1-consumer-7c0345f5-4806-4957-be26-eb4b3bd6a9dc
>  /10.2.3.40 
> default-6697bb36-bf03-46e4-8f3e-4ef987177834-StreamThread-1-consumer
> {noformat}
> I suspect that the formatting rules are incorrect 
> https://github.com/apache/kafka/blob/0.11.0/core/src/main/scala/kafka/admin/ConsumerGroupCommand.scala#L137.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (KAFKA-6158) CONSUMER-ID and HOST values are concatenated if the CONSUMER-ID is > 50 chars

2017-11-02 Thread Vahid Hashemian (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian reassigned KAFKA-6158:
--

Assignee: Vahid Hashemian

> CONSUMER-ID and HOST values are concatenated if the CONSUMER-ID is > 50 chars
> -
>
> Key: KAFKA-6158
> URL: https://issues.apache.org/jira/browse/KAFKA-6158
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.11.0.0
>Reporter: Gustav Westling
>Assignee: Vahid Hashemian
>
> Using the command:
> {noformat}
> ./kafka-consumer-groups.sh --bootstrap-server=localhost:9092 --describe 
> --group foo-group
> {noformat}
> If the CONSUMER-ID is too long the delimiter between CONSUMER-ID and HOST 
> disappears.
> Output:
> {noformat}
> TOPIC  PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG 
>CONSUMER-ID   HOST 
>   CLIENT-ID
> foobar-114 8948049 8948663 614
> default-6697bb36-bf03-46e4-8f3e-4ef987177834-StreamThread-1-consumer-7c0345f5-4806-4957-be26-eb4b3bd6a9dc/10.2.3.40
>  
> default-6697bb36-bf03-46e4-8f3e-4ef987177834-StreamThread-1-consumer
> {noformat}
> Expected output:
> {noformat}
> TOPIC  PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG 
>CONSUMER-ID   HOST 
>   CLIENT-ID
> foobar-114 8948049 8948663 614
> default-6697bb36-bf03-46e4-8f3e-4ef987177834-StreamThread-1-consumer-7c0345f5-4806-4957-be26-eb4b3bd6a9dc
>  /10.2.3.40 
> default-6697bb36-bf03-46e4-8f3e-4ef987177834-StreamThread-1-consumer
> {noformat}
> I suspect that the formatting rules are incorrect 
> https://github.com/apache/kafka/blob/0.11.0/core/src/main/scala/kafka/admin/ConsumerGroupCommand.scala#L137.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (KAFKA-4682) Committed offsets should not be deleted if a consumer is still active (KIP-211)

2017-10-31 Thread Vahid Hashemian (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-4682:
---
Summary: Committed offsets should not be deleted if a consumer is still 
active (KIP-211)  (was: Committed offsets should not be deleted if a consumer 
is still active)

> Committed offsets should not be deleted if a consumer is still active 
> (KIP-211)
> ---
>
> Key: KAFKA-4682
> URL: https://issues.apache.org/jira/browse/KAFKA-4682
> Project: Kafka
>  Issue Type: Bug
>Reporter: James Cheng
>Assignee: Vahid Hashemian
>  Labels: kip
>
> Kafka will delete committed offsets that are older than 
> offsets.retention.minutes
> If there is an active consumer on a low traffic partition, it is possible 
> that Kafka will delete the committed offset for that consumer. Once the 
> offset is deleted, a restart or a rebalance of that consumer will cause the 
> consumer to not find any committed offset and start consuming from 
> earliest/latest (depending on auto.offset.reset). I'm not sure, but a broker 
> failover might also cause you to start reading from auto.offset.reset (due to 
> broker restart, or coordinator failover).
> I think that Kafka should only delete offsets for inactive consumers. The 
> timer should only start after a consumer group goes inactive. For example, if 
> a consumer group goes inactive, then after 1 week, delete the offsets for 
> that consumer group. This is a solution that [~junrao] mentioned in 
> https://issues.apache.org/jira/browse/KAFKA-3806?focusedCommentId=15323521&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15323521
> The current workarounds are to:
> # Commit an offset on every partition you own on a regular basis, making sure 
> that it is more frequent than offsets.retention.minutes (a broker-side 
> setting that a consumer might not be aware of)
> or
> # Turn the value of offsets.retention.minutes up really really high. You have 
> to make sure it is higher than any valid low-traffic rate that you want to 
> support. For example, if you want to support a topic where someone produces 
> once a month, you would have to set offsets.retention.minutes to 1 month. 
> or
> # Turn on enable.auto.commit (this is essentially #1, but easier to 
> implement).
> None of these are ideal. 
> #1 can be spammy. It requires your consumers know something about how the 
> brokers are configured. Sometimes it is out of your control. Mirrormaker, for 
> example, only commits offsets on partitions where it receives data. And it is 
> duplication that you need to put into all of your consumers.
> #2 has disk-space impact on the broker (in __consumer_offsets) as well as 
> memory-size on the broker (to answer OffsetFetch).
> #3 I think has the potential for message loss (the consumer might commit on 
> messages that are not yet fully processed)
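For reference, workaround #2 is a broker-side setting in {{server.properties}} (the value below is only an example; the default at the time of writing is 1440, i.e. one day):
{noformat}
# keep committed offsets for ~31 days
offsets.retention.minutes=44640
{noformat}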



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (KAFKA-5848) KafkaConsumer should validate topics/TopicPartitions on subscribe/assign

2017-10-25 Thread Vahid Hashemian (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-5848:
---
Fix Version/s: 1.1.0

> KafkaConsumer should validate topics/TopicPartitions on subscribe/assign
> 
>
> Key: KAFKA-5848
> URL: https://issues.apache.org/jira/browse/KAFKA-5848
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.11.0.0
>Reporter: Matthias J. Sax
>Assignee: Vahid Hashemian
>Priority: Minor
> Fix For: 1.1.0
>
>
> Currently, {{KafkaConsumer}} checks if the provided topics on {{subscribe()}} 
> and {{TopicPartition}} on {{assign()}} don't contain topic names that are 
> {{null}} or an empty string. 
> However, it could do some more validation:
>  - check if invalid topic characters are in the string (this might be 
> feasible for {{Patterns}}, too?)
>  - check if provided partition numbers are valid (i.e., not negative and maybe 
> not larger than the available partitions?)
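A minimal sketch of those extra checks (the character rule mirrors what Kafka documents for legal topic names; this is not the actual {{KafkaConsumer}} code):
{code}
import java.util.regex.Pattern;
import org.apache.kafka.common.TopicPartition;

class TopicPartitionValidator {
    // legal characters for Kafka topic names
    private static final Pattern LEGAL = Pattern.compile("[a-zA-Z0-9._-]+");

    static void validate(TopicPartition tp, int knownPartitionCount) {
        if (!LEGAL.matcher(tp.topic()).matches())
            throw new IllegalArgumentException("Invalid topic name: " + tp.topic());
        if (tp.partition() < 0 || tp.partition() >= knownPartitionCount)
            throw new IllegalArgumentException("Invalid partition: " + tp.partition());
    }
}
{code}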



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (KAFKA-5848) KafkaConsumer should validate topics/TopicPartitions on subscribe/assign

2017-10-25 Thread Vahid Hashemian (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian reassigned KAFKA-5848:
--

Assignee: Vahid Hashemian

> KafkaConsumer should validate topics/TopicPartitions on subscribe/assign
> 
>
> Key: KAFKA-5848
> URL: https://issues.apache.org/jira/browse/KAFKA-5848
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.11.0.0
>Reporter: Matthias J. Sax
>Assignee: Vahid Hashemian
>Priority: Minor
>
> Currently, {{KafkaConsumer}} checks if the provided topics on {{subscribe()}} 
> and {{TopicPartition}} on {{assign()}} don't contain topic names that are 
> {{null}} or an empty string. 
> However, it could do some more validation:
>  - check if invalid topic characters are in the string (this might be 
> feasible for {{Patterns}}, too?)
>  - check if provided partition numbers are valid (i.e., not negative and maybe 
> not larger than the available partitions?)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (KAFKA-6110) Warning when running the broker on Windows

2017-10-23 Thread Vahid Hashemian (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-6110:
---
Description: 
*This issue exists in 1.0.0-RC2.*

The following warning appears in the broker log at startup:
{code}
[2017-10-23 15:29:49,370] WARN Error processing 
kafka.log:type=LogManager,name=LogDirectoryOffline,logDirectory=C:\tmp\kafka-logs
 (com.yammer.metrics.reporting.JmxReporter)
javax.management.MalformedObjectNameException: Invalid character ':' in value 
part of property
at javax.management.ObjectName.construct(ObjectName.java:618)
at javax.management.ObjectName.<init>(ObjectName.java:1382)
at 
com.yammer.metrics.reporting.JmxReporter.onMetricAdded(JmxReporter.java:395)
at 
com.yammer.metrics.core.MetricsRegistry.notifyMetricAdded(MetricsRegistry.java:516)
at 
com.yammer.metrics.core.MetricsRegistry.getOrAdd(MetricsRegistry.java:491)
at 
com.yammer.metrics.core.MetricsRegistry.newGauge(MetricsRegistry.java:79)
at 
kafka.metrics.KafkaMetricsGroup$class.newGauge(KafkaMetricsGroup.scala:80)
at kafka.log.LogManager.newGauge(LogManager.scala:50)
at kafka.log.LogManager$$anonfun$6.apply(LogManager.scala:117)
at kafka.log.LogManager$$anonfun$6.apply(LogManager.scala:116)
at 
scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at kafka.log.LogManager.<init>(LogManager.scala:116)
at kafka.log.LogManager$.apply(LogManager.scala:799)
at kafka.server.KafkaServer.startup(KafkaServer.scala:222)
at 
kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:38)
at kafka.Kafka$.main(Kafka.scala:92)
at kafka.Kafka.main(Kafka.scala)
{code}

  was:
The following warning appears in the broker log at startup:
{code}
[2017-10-23 15:29:49,370] WARN Error processing 
kafka.log:type=LogManager,name=LogDirectoryOffline,logDirectory=C:\tmp\kafka-logs
 (com.yammer.metrics.reporting.JmxReporter)
javax.management.MalformedObjectNameException: Invalid character ':' in value 
part of property
at javax.management.ObjectName.construct(ObjectName.java:618)
at javax.management.ObjectName.<init>(ObjectName.java:1382)
at 
com.yammer.metrics.reporting.JmxReporter.onMetricAdded(JmxReporter.java:395)
at 
com.yammer.metrics.core.MetricsRegistry.notifyMetricAdded(MetricsRegistry.java:516)
at 
com.yammer.metrics.core.MetricsRegistry.getOrAdd(MetricsRegistry.java:491)
at 
com.yammer.metrics.core.MetricsRegistry.newGauge(MetricsRegistry.java:79)
at 
kafka.metrics.KafkaMetricsGroup$class.newGauge(KafkaMetricsGroup.scala:80)
at kafka.log.LogManager.newGauge(LogManager.scala:50)
at kafka.log.LogManager$$anonfun$6.apply(LogManager.scala:117)
at kafka.log.LogManager$$anonfun$6.apply(LogManager.scala:116)
at 
scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at kafka.log.LogManager.<init>(LogManager.scala:116)
at kafka.log.LogManager$.apply(LogManager.scala:799)
at kafka.server.KafkaServer.startup(KafkaServer.scala:222)
at 
kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:38)
at kafka.Kafka$.main(Kafka.scala:92)
at kafka.Kafka.main(Kafka.scala)
{code}


> Warning when running the broker on Windows
> --
>
> Key: KAFKA-6110
> URL: https://issues.apache.org/jira/browse/KAFKA-6110
> Project: Kafka
>  Issue Type: Bug
>Reporter: Vahid Hashemian
>Priority: Minor
>
> *This issue exists in 1.0.0-RC2.*
> The following warning appears in the broker log at startup:
> {code}
> [2017-10-23 15:29:49,370] WARN Error processing 
> kafka.log:type=LogManager,name=LogDirectoryOffline,logDirectory=C:\tmp\kafka-logs
>  (com.yammer.metrics.reporting.JmxReporter)
> javax.management.MalformedObjectNameException: Invalid character ':' in value 
> part of property
> at javax.management.ObjectName.construct(ObjectName.java:618)
> at javax.management.ObjectName.<init>(ObjectName.java:1382)
> at 
> com.yammer.metrics.reporting.JmxReporter.onMetricAdded(JmxReporter.java:395)
> at 
> com.yammer.metrics.core.MetricsRegistry.notifyMetricAdded(MetricsRegistry.java:516)
> at 
> com.yammer.metrics.core.MetricsRegistry.getOrAdd(MetricsRegistry.java:491)
> at 
> com.yammer.metrics.core.MetricsRegistry.newGauge(MetricsRegistry.java:79)
> at 
> kafka.metrics.KafkaMetricsGroup$class.newGauge(KafkaMetricsGroup.scala:80)
> at kafka.log.LogManager.newGauge(LogManager.scala:50)
> at 

[jira] [Updated] (KAFKA-6100) Streams quick start crashes Java on Windows

2017-10-23 Thread Vahid Hashemian (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-6100:
---
Attachment: java.exe_171023_115335.dmp.zip

Attached a crash dump created with {{procdump}}.

Here is some basic info from this dump:
{code}
DUMP_CLASS: 2

DUMP_QUALIFIER: 400

CONTEXT:  (.ecxr)
rax=0001 rbx= rcx=0005
rdx= rsi=18c3b428 rdi=00928640
rip=7ffc0b39d658 rsp=18c3b1e0 rbp=0010
 r8=  r9= r10=009359d0
r11= r12=7ffc0b339620 r13=7ffc0b339560
r14= r15=0080
iopl=0 nv up ei pl nz na pe nc
cs=0033  ss=002b  ds=002b  es=002b  fs=0053  gs=002b efl=0202
ucrtbase!invoke_watson+0x18:
7ffc`0b39d658 cd29            int     29h
Resetting default scope

FAULTING_IP: 
ucrtbase!invoke_watson+18
7ffc`0b39d658 cd29            int     29h

EXCEPTION_RECORD:  (.exr -1)
ExceptionAddress: 7ffc0b39d658 (ucrtbase!invoke_watson+0x0018)
   ExceptionCode: c409 (Security check failure or stack buffer overrun)
  ExceptionFlags: 0001
NumberParameters: 1
   Parameter[0]: 0005
Subcode: 0x5 FAST_FAIL_INVALID_ARG

DEFAULT_BUCKET_ID:  FAIL_FAST_INVALID_ARG

PROCESS_NAME:  java.exe

ERROR_CODE: (NTSTATUS) 0xc409 - The system detected an overrun of a 
stack-based buffer in this application. This overrun could potentially allow a 
malicious user to gain control of this application.

EXCEPTION_CODE: (NTSTATUS) 0xc409 - The system detected an overrun of a 
stack-based buffer in this application. This overrun could potentially allow a 
malicious user to gain control of this application.

EXCEPTION_CODE_STR:  c409

EXCEPTION_PARAMETER1:  0005

WATSON_BKT_PROCSTAMP:  59ba508a

WATSON_BKT_PROCVER:  8.0.1520.16

PROCESS_VER_PRODUCT:  Java(TM) Platform SE 8

WATSON_BKT_MODULE:  ucrtbase.dll

WATSON_BKT_MODSTAMP:  59bf2b6f

WATSON_BKT_MODOFFSET:  6d658

WATSON_BKT_MODVER:  6.2.14393.1770

MODULE_VER_PRODUCT:  Microsoft® Windows® Operating System

BUILD_VERSION_STRING:  10.0.14393.1198 (rs1_release_sec.170427-1353)

MODLIST_WITH_TSCHKSUM_HASH:  a875db61e6293693921cd0f58006b89f200dd909

MODLIST_SHA1_HASH:  81bf4e1fbb4ade1b9d312304478f0499566307cf

COMMENT:  
*** "C:\Users\User\Downloads\Procdump\procdump64.exe" -accepteula -ma -j 
"c:\tmp\dumps" 6088 520 0247
*** Just-In-Time debugger. PID: 6088 Event Handle: 520 JIT Context: .jdinfo 
0x247

NTGLOBALFLAG:  0

PROCESS_BAM_CURRENT_THROTTLED: 0

PROCESS_BAM_PREVIOUS_THROTTLED: 0

APPLICATION_VERIFIER_FLAGS:  0

PRODUCT_TYPE:  1

SUITE_MASK:  272

DUMP_FLAGS:  8000c07

DUMP_TYPE:  3

ANALYSIS_SESSION_HOST:  WINDEV1610EVAL

ANALYSIS_SESSION_TIME:  10-23-2017 11:57:06.0320

ANALYSIS_VERSION: 10.0.16299.15 x86fre

THREAD_ATTRIBUTES: 
OS_LOCALE:  ENU

PROBLEM_CLASSES: 

ID: [0n270]
Type:   [FAIL_FAST]
Class:  Primary
Scope:  DEFAULT_BUCKET_ID (Failure Bucket ID prefix)
BUCKET_ID
Name:   Add
Data:   Omit
PID:[Unspecified]
TID:[Unspecified]
Frame:  [0]

ID: [0n257]
Type:   [INVALID_ARG]
Class:  Addendum
Scope:  DEFAULT_BUCKET_ID (Failure Bucket ID prefix)
BUCKET_ID
Name:   Add
Data:   Omit
PID:[Unspecified]
TID:[Unspecified]
Frame:  [0]

BUGCHECK_STR:  FAIL_FAST_INVALID_ARG

PRIMARY_PROBLEM_CLASS:  FAIL_FAST

LAST_CONTROL_TRANSFER:  from 7ffc0b39d521 to 7ffc0b39d658

STACK_TEXT:  
`18c3b1e0 7ffc`0b39d521 : ` `0010 
`18c3b428 7ffc`0b33be21 : ucrtbase!invoke_watson+0x18
`18c3b210 7ffc`0b39d5f9 : ` 7ffc`0b33a63d 
` `18c3b428 : ucrtbase!invalid_parameter+0x81
`18c3b250 7ffc`0b39751d : `0080 ` 
`0010 `18c3b428 : ucrtbase!invalid_parameter_noinfo+0x9
`18c3b290 7ffb`f18fd150 : `0004 `18c3b428 
`00935b70 `0098 : ucrtbase!aligned_offset_malloc_base+0xa1
`18c3b2c0 7ffb`f18fd082 : `00935b70 `00935b60 
`18c3b4f9 `18c3b428 : 
librocksdbjni4615894589067161782!Java_org_rocksdb_WriteBatchWithIndex_setSavePoint0+0x26120
`18c3b330 7ffb`f18fe909 : `00935b60 `18c3b448 
` `0008 : 
librocksdbjni4615894589067161782!Java_org_rocksdb_WriteBatchWithIndex_setSavePoint0+0x26052
`18c3b3a0 7ffb`f194f19f : `0094ee90 `0080 
7ffb`0004 `0094 : 
librocksdbjni4615894589067161782!Java_org_rocksdb_WriteBatchWithIndex_setSavePoint0+0x278d9
`18c3b410 7ffb`f190db83 : `00936560 7ffb`f1c26140 
`00931790 `00936560 : 

[jira] [Created] (KAFKA-6100) Streams quick start crashes Java on Windows

2017-10-20 Thread Vahid Hashemian (JIRA)
Vahid Hashemian created KAFKA-6100:
--

 Summary: Streams quick start crashes Java on Windows 
 Key: KAFKA-6100
 URL: https://issues.apache.org/jira/browse/KAFKA-6100
 Project: Kafka
  Issue Type: Bug
  Components: streams
 Environment: Windows 10 VM
Reporter: Vahid Hashemian
 Attachments: Screen Shot 2017-10-20 at 11.53.14 AM.png

*This issue was detected in 1.0.0 RC2.*

The following step in streams quick start crashes Java on Windows 10:
{{bin/kafka-run-class.sh 
org.apache.kafka.streams.examples.wordcount.WordCountDemo}}

I tracked this down to [this 
change|https://github.com/apache/kafka/commit/196bcfca0c56420793f85514d1602bde564b0651#diff-6512f838e273b79676cac5f72456127fR67],
 and it seems the new version of RocksDB is to blame. I tried the quick start 
with the previous version of RocksDB (5.7.3) and did not run into this issue.
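
One way to test that hypothesis locally — a sketch, assuming a Gradle build where the dependency resolution can be forced — is to pin {{rocksdbjni}} back to the previous version:
{code}
// build.gradle (sketch): force the older RocksDB that did not crash
configurations.all {
    resolutionStrategy.force 'org.rocksdb:rocksdbjni:5.7.3'
}
{code}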



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-6075) Kafka cannot recover after an unclean shutdown on Windows

2017-10-19 Thread Vahid Hashemian (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16212168#comment-16212168
 ] 

Vahid Hashemian commented on KAFKA-6075:


It looks like this issue started when we switched to 
{{Files.deleteIfExists(file.toPath)}} to delete log/index files 
([here|https://github.com/apache/kafka/commit/ab148f39ae64ecbaa84f49c38b3cab8a0a0fd846#diff-ffa8861e850121997a534ebdde2929c6]).
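
A minimal repro of the underlying Windows behavior (a sketch, independent of Kafka): deleting a file that still has an open handle or memory mapping fails on Windows with exactly this {{FileSystemException}}, while the same delete succeeds on Linux.
{code}
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class WindowsDeleteRepro {
    public static void main(String[] args) throws Exception {
        Path path = Paths.get("demo.timeindex");
        try (RandomAccessFile raf = new RandomAccessFile(path.toFile(), "rw");
             FileChannel channel = raf.getChannel()) {
            // Map the file, as Kafka does for index files.
            MappedByteBuffer buffer = channel.map(FileChannel.MapMode.READ_WRITE, 0, 1024);
            buffer.putInt(42);
            // On Windows this throws java.nio.file.FileSystemException
            // ("The process cannot access the file because it is being used
            // by another process."); on Linux the delete succeeds.
            Files.deleteIfExists(path);
        }
    }
}
{code}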

> Kafka cannot recover after an unclean shutdown on Windows
> -
>
> Key: KAFKA-6075
> URL: https://issues.apache.org/jira/browse/KAFKA-6075
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.11.0.1
>Reporter: Vahid Hashemian
>
> An unclean shutdown of broker on Windows cannot be recovered by Kafka. Steps 
> to reproduce from a fresh build:
> # Start zookeeper
> # Start a broker
> # Create a topic {{test}}
> # Do an unclean shutdown of broker (find the process id by {{wmic process 
> where "caption = 'java.exe' and commandline like '%server.properties%'" get 
> processid}}), then kill the process by {{taskkill /pid <pid> /f}}
> # Start the broker again
> This leads to the following errors:
> {code}
> [2017-10-17 17:13:24,819] ERROR Error while loading log dir C:\tmp\kafka-logs 
> (kafka.log.LogManager)
> java.nio.file.FileSystemException: 
> C:\tmp\kafka-logs\test-0\.timeindex: The process cannot 
> access the file because it is being used by another process.
> at 
> sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86)
> at 
> sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
> at 
> sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102)
> at 
> sun.nio.fs.WindowsFileSystemProvider.implDelete(WindowsFileSystemProvider.java:269)
> at 
> sun.nio.fs.AbstractFileSystemProvider.deleteIfExists(AbstractFileSystemProvider.java:108)
> at java.nio.file.Files.deleteIfExists(Files.java:1165)
> at kafka.log.Log$$anonfun$loadSegmentFiles$3.apply(Log.scala:333)
> at kafka.log.Log$$anonfun$loadSegmentFiles$3.apply(Log.scala:295)
> at 
> scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
> at 
> scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
> at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
> at 
> scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
> at kafka.log.Log.loadSegmentFiles(Log.scala:295)
> at kafka.log.Log.loadSegments(Log.scala:404)
> at kafka.log.Log.<init>(Log.scala:201)
> at kafka.log.Log$.apply(Log.scala:1729)
> at 
> kafka.log.LogManager.kafka$log$LogManager$$loadLog(LogManager.scala:221)
> at 
> kafka.log.LogManager$$anonfun$loadLogs$2$$anonfun$8$$anonfun$apply$16$$anonfun$apply$2.apply$mcV$sp(LogManager.scala:292)
> at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:61)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> [2017-10-17 17:13:24,819] ERROR Error while deleting the clean shutdown file 
> in dir C:\tmp\kafka-logs (kafka.server.LogDirFailureChannel)
> java.nio.file.FileSystemException: 
> C:\tmp\kafka-logs\test-0\.timeindex: The process cannot 
> access the file because it is being used by another process.
> at 
> sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86)
> at 
> sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
> at 
> sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102)
> at 
> sun.nio.fs.WindowsFileSystemProvider.implDelete(WindowsFileSystemProvider.java:269)
> at 
> sun.nio.fs.AbstractFileSystemProvider.deleteIfExists(AbstractFileSystemProvider.java:108)
> at java.nio.file.Files.deleteIfExists(Files.java:1165)
> at kafka.log.Log$$anonfun$loadSegmentFiles$3.apply(Log.scala:333)
> at kafka.log.Log$$anonfun$loadSegmentFiles$3.apply(Log.scala:295)
> at 
> scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
> at 
> scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
> at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
> at 
> 

[jira] [Commented] (KAFKA-6091) Authorization API is called hundred's of times when there are no privileges

2017-10-19 Thread Vahid Hashemian (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16211873#comment-16211873
 ] 

Vahid Hashemian commented on KAFKA-6091:


Is this a duplicate of 
[KAFKA-5854|https://issues.apache.org/jira/browse/KAFKA-5854]?

> Authorization API is called hundred's of times when there are no privileges
> ---
>
> Key: KAFKA-6091
> URL: https://issues.apache.org/jira/browse/KAFKA-6091
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.11.0.0
>Reporter: kalyan kumar kalvagadda
>
> This issue is observed with the Kafka/Sentry integration. When Sentry does not 
> have any permissions for a topic and a producer tries to add a 
> message to that topic, Sentry returns a failure, but Kafka is not able to handle it 
> properly and ends up invoking the Sentry Auth API ~564 times. This will 
> choke the authorization service.
> Here are the list of privileges that are needed for a producer to add a 
> message to a topic
> In this example "192.168.0.3" is hostname and topic name is "tOpIc1"
> {noformat}
> HOST=192.168.0.3->Topic=tOpIc1->action=DESCRIBE
> HOST=192.168.0.3->Cluster=kafka-cluster->action=CREATE
> HOST=192.168.0.3->Topic=tOpIc1->action=WRITE
> {noformat}
> The problem reported in this JIRA is seen when there are no permissions. 
> The moment a DESCRIBE permission is added, the issue is not seen: 
> authorization fails, but Kafka doesn't bombard the authorizer with more requests.
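
For comparison, with Kafka's built-in authorizer the grants listed above would look roughly like the following (a sketch; the principal name and ZooKeeper address are placeholders):
{code}
bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 \
  --add --allow-principal User:producer1 --allow-host 192.168.0.3 \
  --operation Describe --operation Write --topic tOpIc1
bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 \
  --add --allow-principal User:producer1 --allow-host 192.168.0.3 \
  --operation Create --cluster
{code}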



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-4682) Committed offsets should not be deleted if a consumer is still active

2017-10-18 Thread Vahid Hashemian (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16210323#comment-16210323
 ] 

Vahid Hashemian commented on KAFKA-4682:


I just started a KIP discussion for this JIRA. The KIP can be found 
[here|https://cwiki.apache.org/confluence/display/KAFKA/KIP-211%3A+Revise+Expiration+Semantics+of+Consumer+Group+Offsets].

> Committed offsets should not be deleted if a consumer is still active
> -
>
> Key: KAFKA-4682
> URL: https://issues.apache.org/jira/browse/KAFKA-4682
> Project: Kafka
>  Issue Type: Bug
>Reporter: James Cheng
>Assignee: Vahid Hashemian
>  Labels: kip
>
> Kafka will delete committed offsets that are older than 
> offsets.retention.minutes
> If there is an active consumer on a low traffic partition, it is possible 
> that Kafka will delete the committed offset for that consumer. Once the 
> offset is deleted, a restart or a rebalance of that consumer will cause the 
> consumer to not find any committed offset and start consuming from 
> earliest/latest (depending on auto.offset.reset). I'm not sure, but a broker 
> failover might also cause you to start reading from auto.offset.reset (due to 
> broker restart, or coordinator failover).
> I think that Kafka should only delete offsets for inactive consumers. The 
> timer should only start after a consumer group goes inactive. For example, if 
> a consumer group goes inactive, then after 1 week, delete the offsets for 
> that consumer group. This is a solution that [~junrao] mentioned in 
> https://issues.apache.org/jira/browse/KAFKA-3806?focusedCommentId=15323521=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15323521
> The current workarounds are to:
> # Commit an offset on every partition you own on a regular basis, making sure 
> that it is more frequent than offsets.retention.minutes (a broker-side 
> setting that a consumer might not be aware of)
> or
> # Turn the value of offsets.retention.minutes up really really high. You have 
> to make sure it is higher than any valid low-traffic rate that you want to 
> support. For example, if you want to support a topic where someone produces 
> once a month, you would have to set offsets.retention.minutes to 1 month. 
> or
> # Turn on enable.auto.commit (this is essentially #1, but easier to 
> implement).
> None of these are ideal. 
> #1 can be spammy. It requires your consumers know something about how the 
> brokers are configured. Sometimes it is out of your control. Mirrormaker, for 
> example, only commits offsets on partitions where it receives data. And it is 
> duplication that you need to put into all of your consumers.
> #2 has disk-space impact on the broker (in __consumer_offsets) as well as 
> memory-size on the broker (to answer OffsetFetch).
> #3 I think has the potential for message loss (the consumer might commit on 
> messages that are not yet fully processed)
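
To illustrate workaround #1 above — a minimal sketch against a recent Java client, with placeholder topic, group, and broker names — a consumer can keep its offsets alive by periodically re-committing its current position on every owned partition:
{code}
import java.time.Duration;
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class KeepOffsetsAlive {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "low-traffic-group");
        props.put("enable.auto.commit", "false");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("low-traffic-topic"));
            while (true) {
                consumer.poll(Duration.ofSeconds(1)); // records would be processed here
                Map<TopicPartition, OffsetAndMetadata> positions = new HashMap<>();
                for (TopicPartition tp : consumer.assignment()) {
                    positions.put(tp, new OffsetAndMetadata(consumer.position(tp)));
                }
                if (!positions.isEmpty()) {
                    // Re-committing resets the broker's offsets.retention.minutes
                    // timer even when no new records arrive.
                    consumer.commitSync(positions);
                }
            }
        }
    }
}
{code}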



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (KAFKA-4682) Committed offsets should not be deleted if a consumer is still active

2017-10-18 Thread Vahid Hashemian (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-4682:
---
Labels: kip  (was: )

> Committed offsets should not be deleted if a consumer is still active
> -
>
> Key: KAFKA-4682
> URL: https://issues.apache.org/jira/browse/KAFKA-4682
> Project: Kafka
>  Issue Type: Bug
>Reporter: James Cheng
>  Labels: kip
>
> Kafka will delete committed offsets that are older than 
> offsets.retention.minutes
> If there is an active consumer on a low traffic partition, it is possible 
> that Kafka will delete the committed offset for that consumer. Once the 
> offset is deleted, a restart or a rebalance of that consumer will cause the 
> consumer to not find any committed offset and start consuming from 
> earliest/latest (depending on auto.offset.reset). I'm not sure, but a broker 
> failover might also cause you to start reading from auto.offset.reset (due to 
> broker restart, or coordinator failover).
> I think that Kafka should only delete offsets for inactive consumers. The 
> timer should only start after a consumer group goes inactive. For example, if 
> a consumer group goes inactive, then after 1 week, delete the offsets for 
> that consumer group. This is a solution that [~junrao] mentioned in 
> https://issues.apache.org/jira/browse/KAFKA-3806?focusedCommentId=15323521=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15323521
> The current workarounds are to:
> # Commit an offset on every partition you own on a regular basis, making sure 
> that it is more frequent than offsets.retention.minutes (a broker-side 
> setting that a consumer might not be aware of)
> or
> # Turn the value of offsets.retention.minutes up really really high. You have 
> to make sure it is higher than any valid low-traffic rate that you want to 
> support. For example, if you want to support a topic where someone produces 
> once a month, you would have to set offsets.retention.minutes to 1 month. 
> or
> # Turn on enable.auto.commit (this is essentially #1, but easier to 
> implement).
> None of these are ideal. 
> #1 can be spammy. It requires your consumers know something about how the 
> brokers are configured. Sometimes it is out of your control. Mirrormaker, for 
> example, only commits offsets on partitions where it receives data. And it is 
> duplication that you need to put into all of your consumers.
> #2 has disk-space impact on the broker (in __consumer_offsets) as well as 
> memory-size on the broker (to answer OffsetFetch).
> #3 I think has the potential for message loss (the consumer might commit on 
> messages that are not yet fully processed)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (KAFKA-4682) Committed offsets should not be deleted if a consumer is still active

2017-10-18 Thread Vahid Hashemian (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian reassigned KAFKA-4682:
--

Assignee: Vahid Hashemian

> Committed offsets should not be deleted if a consumer is still active
> -
>
> Key: KAFKA-4682
> URL: https://issues.apache.org/jira/browse/KAFKA-4682
> Project: Kafka
>  Issue Type: Bug
>Reporter: James Cheng
>Assignee: Vahid Hashemian
>  Labels: kip
>
> Kafka will delete committed offsets that are older than 
> offsets.retention.minutes
> If there is an active consumer on a low traffic partition, it is possible 
> that Kafka will delete the committed offset for that consumer. Once the 
> offset is deleted, a restart or a rebalance of that consumer will cause the 
> consumer to not find any committed offset and start consuming from 
> earliest/latest (depending on auto.offset.reset). I'm not sure, but a broker 
> failover might also cause you to start reading from auto.offset.reset (due to 
> broker restart, or coordinator failover).
> I think that Kafka should only delete offsets for inactive consumers. The 
> timer should only start after a consumer group goes inactive. For example, if 
> a consumer group goes inactive, then after 1 week, delete the offsets for 
> that consumer group. This is a solution that [~junrao] mentioned in 
> https://issues.apache.org/jira/browse/KAFKA-3806?focusedCommentId=15323521=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15323521
> The current workarounds are to:
> # Commit an offset on every partition you own on a regular basis, making sure 
> that it is more frequent than offsets.retention.minutes (a broker-side 
> setting that a consumer might not be aware of)
> or
> # Turn the value of offsets.retention.minutes up really really high. You have 
> to make sure it is higher than any valid low-traffic rate that you want to 
> support. For example, if you want to support a topic where someone produces 
> once a month, you would have to set offsets.retention.minutes to 1 month. 
> or
> # Turn on enable.auto.commit (this is essentially #1, but easier to 
> implement).
> None of these are ideal. 
> #1 can be spammy. It requires your consumers know something about how the 
> brokers are configured. Sometimes it is out of your control. Mirrormaker, for 
> example, only commits offsets on partitions where it receives data. And it is 
> duplication that you need to put into all of your consumers.
> #2 has disk-space impact on the broker (in __consumer_offsets) as well as 
> memory-size on the broker (to answer OffsetFetch).
> #3 I think has the potential for message loss (the consumer might commit on 
> messages that are not yet fully processed)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-6075) Kafka cannot recover after an unclean shutdown on Windows

2017-10-17 Thread Vahid Hashemian (JIRA)
Vahid Hashemian created KAFKA-6075:
--

 Summary: Kafka cannot recover after an unclean shutdown on Windows
 Key: KAFKA-6075
 URL: https://issues.apache.org/jira/browse/KAFKA-6075
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.11.0.1
Reporter: Vahid Hashemian


An unclean shutdown of broker on Windows cannot be recovered by Kafka. Steps to 
reproduce from a fresh build:
# Start zookeeper
# Start a broker
# Create a topic {{test}}
# Do an unclean shutdown of broker (find the process id by {{wmic process where 
"caption = 'java.exe' and commandline like '%server.properties%'" get 
processid}}), then kill the process by {{taskkill /pid <pid> /f}}
# Start the broker again

This leads to the following errors:
{code}
[2017-10-17 17:13:24,819] ERROR Error while loading log dir C:\tmp\kafka-logs 
(kafka.log.LogManager)
java.nio.file.FileSystemException: 
C:\tmp\kafka-logs\test-0\.timeindex: The process cannot 
access the file because it is being used by another process.

at 
sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86)
at 
sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
at 
sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102)
at 
sun.nio.fs.WindowsFileSystemProvider.implDelete(WindowsFileSystemProvider.java:269)
at 
sun.nio.fs.AbstractFileSystemProvider.deleteIfExists(AbstractFileSystemProvider.java:108)
at java.nio.file.Files.deleteIfExists(Files.java:1165)
at kafka.log.Log$$anonfun$loadSegmentFiles$3.apply(Log.scala:333)
at kafka.log.Log$$anonfun$loadSegmentFiles$3.apply(Log.scala:295)
at 
scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
at 
scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
at 
scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
at kafka.log.Log.loadSegmentFiles(Log.scala:295)
at kafka.log.Log.loadSegments(Log.scala:404)
at kafka.log.Log.<init>(Log.scala:201)
at kafka.log.Log$.apply(Log.scala:1729)
at 
kafka.log.LogManager.kafka$log$LogManager$$loadLog(LogManager.scala:221)
at 
kafka.log.LogManager$$anonfun$loadLogs$2$$anonfun$8$$anonfun$apply$16$$anonfun$apply$2.apply$mcV$sp(LogManager.scala:292)
at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:61)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[2017-10-17 17:13:24,819] ERROR Error while deleting the clean shutdown file in 
dir C:\tmp\kafka-logs (kafka.server.LogDirFailureChannel)
java.nio.file.FileSystemException: 
C:\tmp\kafka-logs\test-0\.timeindex: The process cannot 
access the file because it is being used by another process.

at 
sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86)
at 
sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
at 
sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102)
at 
sun.nio.fs.WindowsFileSystemProvider.implDelete(WindowsFileSystemProvider.java:269)
at 
sun.nio.fs.AbstractFileSystemProvider.deleteIfExists(AbstractFileSystemProvider.java:108)
at java.nio.file.Files.deleteIfExists(Files.java:1165)
at kafka.log.Log$$anonfun$loadSegmentFiles$3.apply(Log.scala:333)
at kafka.log.Log$$anonfun$loadSegmentFiles$3.apply(Log.scala:295)
at 
scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
at 
scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
at 
scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
at kafka.log.Log.loadSegmentFiles(Log.scala:295)
at kafka.log.Log.loadSegments(Log.scala:404)
at kafka.log.Log.<init>(Log.scala:201)
at kafka.log.Log$.apply(Log.scala:1729)
at 
kafka.log.LogManager.kafka$log$LogManager$$loadLog(LogManager.scala:221)
at 
kafka.log.LogManager$$anonfun$loadLogs$2$$anonfun$8$$anonfun$apply$16$$anonfun$apply$2.apply$mcV$sp(LogManager.scala:292)
at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:61)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at 

[jira] [Commented] (KAFKA-4682) Committed offsets should not be deleted if a consumer is still active

2017-10-12 Thread Vahid Hashemian (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16202763#comment-16202763
 ] 

Vahid Hashemian commented on KAFKA-4682:


[~hachikuji] I have started drafting a KIP for the changes discussed here. 
Could you please clarify what you mean by
{quote}... we could probably also remove the commit timestamp and use the 
timestamp from the message itself. ...{quote}
I see that the commit timestamp is set to the time the request is processed 
(which supposedly is when the offset is committed). So I'm not clear what you 
mean by "timestamp from the message itself".
Thanks.

> Committed offsets should not be deleted if a consumer is still active
> -
>
> Key: KAFKA-4682
> URL: https://issues.apache.org/jira/browse/KAFKA-4682
> Project: Kafka
>  Issue Type: Bug
>Reporter: James Cheng
>
> Kafka will delete committed offsets that are older than 
> offsets.retention.minutes
> If there is an active consumer on a low traffic partition, it is possible 
> that Kafka will delete the committed offset for that consumer. Once the 
> offset is deleted, a restart or a rebalance of that consumer will cause the 
> consumer to not find any committed offset and start consuming from 
> earliest/latest (depending on auto.offset.reset). I'm not sure, but a broker 
> failover might also cause you to start reading from auto.offset.reset (due to 
> broker restart, or coordinator failover).
> I think that Kafka should only delete offsets for inactive consumers. The 
> timer should only start after a consumer group goes inactive. For example, if 
> a consumer group goes inactive, then after 1 week, delete the offsets for 
> that consumer group. This is a solution that [~junrao] mentioned in 
> https://issues.apache.org/jira/browse/KAFKA-3806?focusedCommentId=15323521=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15323521
> The current workarounds are to:
> # Commit an offset on every partition you own on a regular basis, making sure 
> that it is more frequent than offsets.retention.minutes (a broker-side 
> setting that a consumer might not be aware of)
> or
> # Turn the value of offsets.retention.minutes up really really high. You have 
> to make sure it is higher than any valid low-traffic rate that you want to 
> support. For example, if you want to support a topic where someone produces 
> once a month, you would have to set offsets.retention.minutes to 1 month. 
> or
> # Turn on enable.auto.commit (this is essentially #1, but easier to 
> implement).
> None of these are ideal. 
> #1 can be spammy. It requires your consumers know something about how the 
> brokers are configured. Sometimes it is out of your control. Mirrormaker, for 
> example, only commits offsets on partitions where it receives data. And it is 
> duplication that you need to put into all of your consumers.
> #2 has disk-space impact on the broker (in __consumer_offsets) as well as 
> memory-size on the broker (to answer OffsetFetch).
> #3 I think has the potential for message loss (the consumer might commit on 
> messages that are not yet fully processed)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (KAFKA-6055) Running tools on Windows fail due to typo in JVM config

2017-10-11 Thread Vahid Hashemian (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-6055:
---
Summary: Running tools on Windows fail due to typo in JVM config  (was: 
Running tools on Windows fail due to incorrect JVM config)

> Running tools on Windows fail due to typo in JVM config
> ---
>
> Key: KAFKA-6055
> URL: https://issues.apache.org/jira/browse/KAFKA-6055
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Reporter: Vahid Hashemian
>Assignee: Vahid Hashemian
>Priority: Blocker
> Fix For: 1.0.0
>
>
> This affects the current trunk and 1.0.0 RC0.
> When running any of the Windows commands under {{bin/windows}} the following 
> error is returned:
> {code}
> Missing +/- setting for VM option 'ExplicitGCInvokesConcurrent'
> Error: Could not create the Java Virtual Machine.
> Error: A fatal exception has occurred. Program will exit.
> {code}
> This error points to this JVM configuration in 
> {{bin\windows\kafka-run-class.bat}}: {{-XX:ExplicitGCInvokesConcurrent}}
> The correct config is {{-XX:+ExplicitGCInvokesConcurrent}}
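
The error is easy to reproduce outside Kafka — boolean {{-XX}} options require an explicit {{+}} or {{-}}:
{code}
java -XX:ExplicitGCInvokesConcurrent -version
# Missing +/- setting for VM option 'ExplicitGCInvokesConcurrent'

java -XX:+ExplicitGCInvokesConcurrent -version
# prints the JVM version as usual
{code}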



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-4201) Add an --assignment-strategy option to new-consumer-based Mirror Maker

2017-10-05 Thread Vahid Hashemian (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16192974#comment-16192974
 ] 

Vahid Hashemian commented on KAFKA-4201:


[~guozhang] Thanks for confirming. I'll start a KIP soon.

> Add an --assignment-strategy option to new-consumer-based Mirror Maker
> --
>
> Key: KAFKA-4201
> URL: https://issues.apache.org/jira/browse/KAFKA-4201
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Vahid Hashemian
>Assignee: Vahid Hashemian
>  Labels: needs-kip
>
> The default assignment strategy in mirror maker will be changed from range to 
> round robin in an upcoming release ([see 
> KAFKA-3818|https://issues.apache.org/jira/browse/KAFKA-3818]). In order to 
> make it easier for users to change the assignment strategy, add an 
> {{--assignment-strategy}} option to Mirror Maker command line tool.
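
Until such an option exists, the same effect can be achieved through the consumer config file passed to Mirror Maker via {{--consumer.config}}:
{code}
# consumer.properties
partition.assignment.strategy=org.apache.kafka.clients.consumer.RoundRobinAssignor
{code}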



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5848) KafkaConsumer should validate topics/TopicPartitions on subscribe/assign

2017-10-04 Thread Vahid Hashemian (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16192202#comment-16192202
 ] 

Vahid Hashemian commented on KAFKA-5848:


[~lijubjohn] Have you started working on this?

> KafkaConsumer should validate topics/TopicPartitions on subscribe/assign
> 
>
> Key: KAFKA-5848
> URL: https://issues.apache.org/jira/browse/KAFKA-5848
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.11.0.0
>Reporter: Matthias J. Sax
>Priority: Minor
>
> Currently, {{KafkaConsumer}} checks if the provided topics on {{subscribe()}} 
> and {{TopicPartition}} on {{assign()}} don't contain topic names that are 
> {{null}} or an empty string. 
> However, it could do some more validation:
>  - check if invalid topic characters are in the string (this might be 
> feasible for {{Patterns}}, too?)
>  - check if provided partition numbers are valid (i.e., not negative and maybe 
> not larger than the available partitions?)
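
A rough sketch of the kind of client-side checks being asked for (method names are hypothetical; the 249-character limit and legal character set follow Kafka's topic naming rules):
{code}
import java.util.regex.Pattern;

public class SubscriptionValidation {
    // Kafka topic names: alphanumerics plus '.', '_' and '-'
    private static final Pattern LEGAL_CHARS = Pattern.compile("[a-zA-Z0-9._-]+");
    private static final int MAX_NAME_LENGTH = 249;

    static void validateTopic(String topic) {
        if (topic == null || topic.isEmpty())
            throw new IllegalArgumentException("Topic name must be non-empty");
        if (topic.length() > MAX_NAME_LENGTH)
            throw new IllegalArgumentException("Topic name exceeds " + MAX_NAME_LENGTH + " characters");
        if (!LEGAL_CHARS.matcher(topic).matches())
            throw new IllegalArgumentException("Illegal characters in topic name: " + topic);
    }

    static void validatePartition(int partition) {
        // A cheap local check; validating against available partitions
        // would require cluster metadata.
        if (partition < 0)
            throw new IllegalArgumentException("Partition must be non-negative");
    }
}
{code}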



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-4201) Add an --assignment-strategy option to new-consumer-based Mirror Maker

2017-10-03 Thread Vahid Hashemian (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16189903#comment-16189903
 ] 

Vahid Hashemian commented on KAFKA-4201:


[~ijuma] It wasn't brought up during an earlier discussion. Should I create 
one? Thanks.

> Add an --assignment-strategy option to new-consumer-based Mirror Maker
> --
>
> Key: KAFKA-4201
> URL: https://issues.apache.org/jira/browse/KAFKA-4201
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Vahid Hashemian
>Assignee: Vahid Hashemian
> Fix For: 1.0.0
>
>
> The default assignment strategy in mirror maker will be changed from range to 
> round robin in an upcoming release ([see 
> KAFKA-3818|https://issues.apache.org/jira/browse/KAFKA-3818]). In order to 
> make it easier for users to change the assignment strategy, add an 
> {{--assignment-strategy}} option to Mirror Maker command line tool.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5999) Offset Fetch Request

2017-10-02 Thread Vahid Hashemian (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16188332#comment-16188332
 ] 

Vahid Hashemian commented on KAFKA-5999:


[~zhaoweilong1023] Could you please elaborate on the issue? The brief 
description above is not very clear. Thanks.

> Offset Fetch Request
> 
>
> Key: KAFKA-5999
> URL: https://issues.apache.org/jira/browse/KAFKA-5999
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Zhao Weilong
>
> Newer Kafka (found in 0.10.2.1) supports a new feature to fetch offsets for 
> all topics by setting the number of topics to -1 in the request (v2).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Closed] (KAFKA-3465) kafka.tools.ConsumerOffsetChecker won't align with kafka New Consumer mode

2017-09-25 Thread Vahid Hashemian (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian closed KAFKA-3465.
--

Closing as the {{ConsumerOffsetChecker}} tool has been removed.

> kafka.tools.ConsumerOffsetChecker won't align with kafka New Consumer mode
> --
>
> Key: KAFKA-3465
> URL: https://issues.apache.org/jira/browse/KAFKA-3465
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 0.9.0.0, 0.11.0.0
>Reporter: BrianLing
>Assignee: Vahid Hashemian
>Priority: Minor
>
> 1. When we enable MirrorMaker to migrate Kafka events from one cluster to 
> another with "new.consumer" mode:
> java -Xmx2G -server -XX:+UseG1GC -XX:MaxGCPauseMillis=20 
> -XX:InitiatingHeapOccupancyPercent=35 -XX:+DisableExplicitGC 
> -Djava.awt.headless=true -Dcom.sun.management.jmxremote 
> -Dcom.sun.management.jmxremote.authenticate=false 
> -Dcom.sun.management.jmxremote.ssl=false 
> -Dkafka.logs.dir=/kafka/kafka-app-logs 
> -Dlog4j.configuration=file:/kafka/kafka_2.10-0.9.0.0/bin/../config/tools-log4j.properties
>  -cp :/kafka/kafka_2.10-0.9.0.0/bin/../libs/* 
> -Dkafka.logs.filename=lvs-slca-mm.log kafka.tools.MirrorMaker lvs-slca-mm.log 
> --consumer.config ../config/consumer.properties --new.consumer --num.streams 
> 4 --producer.config ../config/producer-slca.properties --whitelist risk.*
> 2. When we use the ConsumerOffsetChecker tool, notice that the lag doesn't 
> change and the owner is none.
> bin/kafka-run-class.sh  kafka.tools.ConsumerOffsetChecker --broker-info 
> --group lvs.slca.mirrormaker --zookeeper lvsdmetlvm01.lvs.paypal.com:2181 
> --topic 
> Group   Topic  Pid Offset  logSize
>  Lag Owner
> lvs.slca.mirrormaker   0   418578332   418678347   100015 
>  none
> lvs.slca.mirrormaker  1   418598026   418698338   100312  
> none
> [Root Cause]
> I think it's due to the 0.9.0 feature that moved offset and consumer owner 
> information from ZooKeeper to Kafka-internal storage. 
>   Does it mean we cannot use the below command to check the new consumer's 
> lag, since the current lag formula is lag = logSize - offset?
> https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/tools/ConsumerOffsetChecker.scala#L80
>   
> https://github.com/apache/kafka/blob/0.9.0/core/src/main/scala/kafka/tools/ConsumerOffsetChecker.scala#L174-L182
>  => offSet Fetch from zookeeper instead of from Kafka
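
For reference, with offsets stored in Kafka rather than ZooKeeper, the equivalent lag check is done with the consumer groups tool:
{code}
bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
  --describe --group lvs.slca.mirrormaker
{code}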



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (KAFKA-5944) Add unit tests for handling of authentication failures in clients

2017-09-22 Thread Vahid Hashemian (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian reassigned KAFKA-5944:
--

Assignee: Vahid Hashemian

> Add unit tests for handling of authentication failures in clients
> -
>
> Key: KAFKA-5944
> URL: https://issues.apache.org/jira/browse/KAFKA-5944
> Project: Kafka
>  Issue Type: Test
>  Components: clients
>Reporter: Rajini Sivaram
>Assignee: Vahid Hashemian
> Fix For: 1.0.0
>
>
> KAFKA-5854 improves authentication failures in clients and has added 
> integration tests and some basic client-side tests that create actual 
> connections to a mock server. It will be good to add a set of tests for 
> producers, consumers etc. that use MockClient to add more extensive tests for 
> various scenarios.
> cc [~hachikuji] [~vahid]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (KAFKA-4860) Kafka batch files does not support path with spaces

2017-09-01 Thread Vahid Hashemian (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-4860:
---
Fix Version/s: 1.0.0

> Kafka batch files does not support path with spaces
> ---
>
> Key: KAFKA-4860
> URL: https://issues.apache.org/jira/browse/KAFKA-4860
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.2.0
> Environment: windows
>Reporter: Vladimír Kleštinec
>Priority: Minor
> Fix For: 1.0.0
>
>
> When we install Kafka on Windows to a path that contains spaces, e.g. C:\Program 
> Files\ApacheKafka, the batch files located in bin/windows don't work.
> Workaround: install on a path without spaces



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-4893) async topic deletion conflicts with max topic length

2017-08-18 Thread Vahid Hashemian (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16133561#comment-16133561
 ] 

Vahid Hashemian commented on KAFKA-4893:


[~onurkaraman] Agreed. I'd be happy to work on your suggested solution. I am 
wondering if a KIP is required. Also, it would be great if we can have some 
feedback from one of the committers. cc [~ijuma] [~hachikuji] [~ewencp]

> async topic deletion conflicts with max topic length
> 
>
> Key: KAFKA-4893
> URL: https://issues.apache.org/jira/browse/KAFKA-4893
> Project: Kafka
>  Issue Type: Bug
>Reporter: Onur Karaman
>Assignee: Vahid Hashemian
>Priority: Minor
>
> As per the 
> [documentation|http://kafka.apache.org/documentation/#basic_ops_add_topic], 
> topics can be only 249 characters long to line up with typical filesystem 
> limitations:
> {quote}
> Each sharded partition log is placed into its own folder under the Kafka log 
> directory. The name of such folders consists of the topic name, appended by a 
> dash (\-) and the partition id. Since a typical folder name can not be over 
> 255 characters long, there will be a limitation on the length of topic names. 
> We assume the number of partitions will not ever be above 100,000. Therefore, 
> topic names cannot be longer than 249 characters. This leaves just enough 
> room in the folder name for a dash and a potentially 5 digit long partition 
> id.
> {quote}
> {{kafka.common.Topic.maxNameLength}} is set to 249 and is used during 
> validation.
> This limit ends up not being quite right since topic deletion ends up 
> renaming the directory to the form {{topic-partition.uniqueId-delete}} as can 
> be seen in {{LogManager.asyncDelete}}:
> {code}
> val dirName = new StringBuilder(removedLog.name)
>   .append(".")
>   
> .append(java.util.UUID.randomUUID.toString.replaceAll("-",""))
>   .append(Log.DeleteDirSuffix)
>   .toString()
> {code}
> So the unique id and "-delete" suffix end up hogging some of the characters. 
> Deleting a long-named topic results in a log message such as the following:
> {code}
> kafka.common.KafkaStorageException: Failed to rename log directory from 
> /tmp/kafka-logs0/0-0
>  to 
> /tmp/kafka-logs0/0-0.797bba3fb2464729840f87769243edbb-delete
>   at kafka.log.LogManager.asyncDelete(LogManager.scala:439)
>   at 
> kafka.cluster.Partition$$anonfun$delete$1.apply$mcV$sp(Partition.scala:142)
>   at kafka.cluster.Partition$$anonfun$delete$1.apply(Partition.scala:137)
>   at kafka.cluster.Partition$$anonfun$delete$1.apply(Partition.scala:137)
>   at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:213)
>   at kafka.utils.CoreUtils$.inWriteLock(CoreUtils.scala:221)
>   at kafka.cluster.Partition.delete(Partition.scala:137)
>   at kafka.server.ReplicaManager.stopReplica(ReplicaManager.scala:230)
>   at 
> kafka.server.ReplicaManager$$anonfun$stopReplicas$2.apply(ReplicaManager.scala:260)
>   at 
> kafka.server.ReplicaManager$$anonfun$stopReplicas$2.apply(ReplicaManager.scala:259)
>   at scala.collection.Iterator$class.foreach(Iterator.scala:727)
>   at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
>   at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
>   at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>   at kafka.server.ReplicaManager.stopReplicas(ReplicaManager.scala:259)
>   at kafka.server.KafkaApis.handleStopReplicaRequest(KafkaApis.scala:174)
>   at kafka.server.KafkaApis.handle(KafkaApis.scala:86)
>   at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:64)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> The topic after this point still exists but has Leader set to -1 and the 
> controller recognizes the topic completion as incomplete (the topic znode is 
> still in /admin/delete_topics).
> I don't believe linkedin has any topic name this long but I'm making the 
> ticket in case anyone runs into this problem.
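
A back-of-the-envelope check of the character budget (a sketch; 255 is the typical folder-name limit cited above):
{code}
public class TopicNameBudget {
    public static void main(String[] args) {
        int folderLimit = 255;
        int partitionSuffix = 1 + 5;   // "-" plus up to a 5-digit partition id
        int deleteSuffix = 1 + 32 + 7; // "." + 32-char uuid (dashes stripped) + "-delete"
        System.out.println(folderLimit - partitionSuffix);                // 249, the advertised limit
        System.out.println(folderLimit - partitionSuffix - deleteSuffix); // 209, what survives asyncDelete's rename
    }
}
{code}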



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (KAFKA-4682) Committed offsets should not be deleted if a consumer is still active

2017-08-11 Thread Vahid Hashemian (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16123742#comment-16123742
 ] 

Vahid Hashemian edited comment on KAFKA-4682 at 8/11/17 5:56 PM:
-

[~hachikuji] Thank you for your comments. You seem to be looking at this with 
an inclination to get rid of the retention time from the OffsetCommit protocol. 
I think with my comments below I'm considering the alternative:

# Ewen's KIP proposes to increase the default retention from 1 day to 7 days. 
So, allowing consumers to set a lower timeout (for the console consumer) seems 
to be helpful after his KIP; the same way allowing them to set a higher timeout 
(for actual consumer applications) is helpful before his KIP.
# Even if we have offset-level expiration, all offsets in the group should 
expire together, because the expiration timer starts ticking for all partitions 
at the same time (when the group becomes empty). The only exception is when a 
consumer has set a non-default retention time for particular partitions (e.g. 
using the OffsetCommit API).
# Agreed. The expiration timestamp won't make sense. Perhaps the retention time 
should be stored and whether to expire or not could be calculated on the fly 
from the time group becomes empty + retention time (we would need to somehow 
keep the timestamp of the group becoming empty). This expiration check needs to 
be performed only if the group is empty; otherwise there is no need to expire 
at all.
# I don't have a strong feeling about this. It's for sure simpler to let all 
offsets expire at the same time. And if we keep the individual offset retention 
it would be easier to change this in case the cache size becomes an issue.

I think there is a risk involved in removing the individual retention from the 
protocol: could some requirement arise in the future that makes us bring it 
back to the protocol? One option is to let that field stay for now, and remove 
it later once we are more certain that it won't be needed back.


was (Author: vahid):
[~hachikuji] Thank you for your comments. You seem to be looking at this with 
an inclination to get rid of the retention time from the OffsetCommit protocol. 
I think with my comments below I'm considering the alternative:

# Ewen's KIP proposes to increase the default retention from 1 day to 7 days. 
So, allowing consumers to set a lower timeout (for the console consumer) seems 
to be helpful after his KIP; the same way allowing them to set a higher timeout 
(for actual consumer applications) is helpful before his KIP.

# Even if we have offset-level expiration, all offsets in the group should 
expire together, because the expiration timer starts ticking for all partitions 
at the same time (when the group becomes empty). The only exception is when a 
consumer has set a non-default retention time for particular partitions (e.g. 
using the OffsetCommit API).

# Agreed. The expiration timestamp won't make sense. Perhaps the retention time 
should be stored and whether to expire or not could be calculated on the fly 
from the time group becomes empty + retention time (we would need to somehow 
keep the timestamp of the group becoming empty). This expiration check needs to 
be performed only if the group is empty; otherwise there is no need to expire 
at all.

# I don't have a strong feeling about this. It's for sure simpler to let all 
offsets expire at the same time. And if we keep the individual offset retention 
it would be easier to change this in case the cache size becomes an issue.

I think there is a risk involved in removing the individual retention from the 
protocol: could some requirement arise in the future that makes us bring it 
back to the protocol? One option is to let that field stay for now, and remove 
it later once we are more certain that it won't be needed back.

> Committed offsets should not be deleted if a consumer is still active
> -
>
> Key: KAFKA-4682
> URL: https://issues.apache.org/jira/browse/KAFKA-4682
> Project: Kafka
>  Issue Type: Bug
>Reporter: James Cheng
>
> Kafka will delete committed offsets that are older than 
> offsets.retention.minutes
> If there is an active consumer on a low traffic partition, it is possible 
> that Kafka will delete the committed offset for that consumer. Once the 
> offset is deleted, a restart or a rebalance of that consumer will cause the 
> consumer to not find any committed offset and start consuming from 
> earliest/latest (depending on auto.offset.reset). I'm not sure, but a broker 
> failover might also cause you to start reading from auto.offset.reset (due to 
> broker restart, or coordinator failover).
> I think that Kafka should only delete offsets for inactive consumers. The 
> timer should only start after a consumer 

[jira] [Commented] (KAFKA-4682) Committed offsets should not be deleted if a consumer is still active

2017-08-10 Thread Vahid Hashemian (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16122498#comment-16122498
 ] 

Vahid Hashemian commented on KAFKA-4682:


[~wushujames] Thanks for your feedback. Regarding the other details you brought 
up:

# [~hachikuji]'s suggestion on 
[KIP-186|https://cwiki.apache.org/confluence/display/KAFKA/KIP-186%3A+Increase+offsets+retention+default+to+7+days]
 makes sense to me. The {{OffsetCommit}} API can be used to override the 
default broker level property {{offset.retention.minutes}} for specific 
group/topic/partitions. This means we probably wouldn't need to have a 
group-level retention config. What a potential KIP for this JIRA would be 
adding is that the retention timer kicks off at the moment the group becomes 
empty, and while the group is stable no offset will be removed (as retention 
timer is not ticking yet).
# Regarding your second point, I guess we could pick either method. It all 
would depend on the criteria for triggering the retention timer for a 
partition. If we trigger it when the group is empty (as in the previous bullet) 
then we would be expiring the offset for {{B-0}} with all other group 
partitions. If, on the other hand, we decide to trigger the timer when the 
partition stops being consumed within the group, then {{B-0}}'s offset could 
expire while the group is still active. I'm not sure how common this scenario 
is in real applications. If it's not that common perhaps it wouldn't cost a lot 
to keep {{B-0}}'s offsets around with the rest of the group. In any case, we 
should be able to pick one approach or the other depending on what you and 
others believe is more reasonable.

What do you think? [~hachikuji], what are your thoughts on this?

> Committed offsets should not be deleted if a consumer is still active
> -
>
> Key: KAFKA-4682
> URL: https://issues.apache.org/jira/browse/KAFKA-4682
> Project: Kafka
>  Issue Type: Bug
>Reporter: James Cheng
>
> Kafka will delete committed offsets that are older than 
> offsets.retention.minutes
> If there is an active consumer on a low traffic partition, it is possible 
> that Kafka will delete the committed offset for that consumer. Once the 
> offset is deleted, a restart or a rebalance of that consumer will cause the 
> consumer to not find any committed offset and start consuming from 
> earliest/latest (depending on auto.offset.reset). I'm not sure, but a broker 
> failover might also cause you to start reading from auto.offset.reset (due to 
> broker restart, or coordinator failover).
> I think that Kafka should only delete offsets for inactive consumers. The 
> timer should only start after a consumer group goes inactive. For example, if 
> a consumer group goes inactive, then after 1 week, delete the offsets for 
> that consumer group. This is a solution that [~junrao] mentioned in 
> https://issues.apache.org/jira/browse/KAFKA-3806?focusedCommentId=15323521=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15323521
> The current workarounds are to:
> # Commit an offset on every partition you own on a regular basis, making sure 
> that it is more frequent than offsets.retention.minutes (a broker-side 
> setting that a consumer might not be aware of)
> or
> # Turn the value of offsets.retention.minutes up really really high. You have 
> to make sure it is higher than any valid low-traffic rate that you want to 
> support. For example, if you want to support a topic where someone produces 
> once a month, you would have to set offsets.retention.minutes to 1 month. 
> or
> # Turn on enable.auto.commit (this is essentially #1, but easier to 
> implement).
> None of these are ideal. 
> #1 can be spammy. It requires your consumers know something about how the 
> brokers are configured. Sometimes it is out of your control. Mirrormaker, for 
> example, only commits offsets on partitions where it receives data. And it is 
> duplication that you need to put into all of your consumers.
> #2 has disk-space impact on the broker (in __consumer_offsets) as well as 
> memory-size on the broker (to answer OffsetFetch).
> #3 I think has the potential for message loss (the consumer might commit on 
> messages that are not yet fully processed)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (KAFKA-5638) Inconsistency in consumer group related ACLs

2017-08-03 Thread Vahid Hashemian (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1611#comment-1611
 ] 

Vahid Hashemian edited comment on KAFKA-5638 at 8/3/17 7:50 PM:


The current usage is probably not incorrect, because the implication you 
mentioned makes sense. However, it is inconsistent. I also don't know of any 
other inferred permission like this one. That's the reason I raised the issue. 
Unless there is a big push back, I would like to take the KIP approach and fix 
this inconsistency by dropping the {{Describe(Cluster)}} check from the API and 
introducing a {{Describe(Group)}} permission requirement. If there is push 
back, we can do the latter only and implement what you suggested above. If you 
are okay with this approach I'll start drafting the KIP.


was (Author: vahid):
The current usage is probably not incorrect, because the implication you 
mentioned makes sense. However, it is inconsistent. I also don't know of any 
other inferred permission like this one. That's the reason I raised the issue. 
Unless there is a big push back, I would like to take the KIP approach and fix 
this inconsistency by dropping the {{Describe(Cluster)}} check from the API and 
introducing a {{Describe(Group)}} group requirement. If there is push back, we 
can do the latter only and implement what you suggested above. If you are okay 
with this approach I'll start drafting the KIP.

> Inconsistency in consumer group related ACLs
> 
>
> Key: KAFKA-5638
> URL: https://issues.apache.org/jira/browse/KAFKA-5638
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.11.0.0
>Reporter: Vahid Hashemian
>Assignee: Vahid Hashemian
>Priority: Minor
>  Labels: needs-kip
>
> Users can see all groups in the cluster (using consumer group’s {{--list}} 
> option) provided that they have {{Describe}} access to the cluster. It would 
> make more sense to modify that experience and limit what is listed in the 
> output to only those groups they have {{Describe}} access to. The reason is, 
> almost everything else is accessible by a user only if the access is 
> specifically granted (through ACL {{--add}}); and this scenario should not be 
> an exception. The potential change would be updating the minimum required 
> permission of {{ListGroup}} from {{Describe (Cluster)}} to {{Describe 
> (Group)}}.
> We can also look at this issue from a different angle: A user with {{Read}} 
> access to a group can describe the group, but the same user would not see 
> anything when listing groups (assuming there is no {{Describe}} access to the 
> cluster). It makes more sense for this user to be able to list all groups 
> s/he can already describe.
> It would be great to know if any user is relying on the existing behavior 
> (listing all consumer groups using a {{Describe (Cluster)}} ACL).
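
Under the proposed change, granting a user visibility into a group would be an explicit ACL like the following (a sketch; the principal, group name, and ZooKeeper address are placeholders):
{code}
bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 \
  --add --allow-principal User:alice --operation Describe --group my-group
{code}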



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5638) Inconsistency in consumer group related ACLs

2017-08-03 Thread Vahid Hashemian (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1611#comment-1611
 ] 

Vahid Hashemian commented on KAFKA-5638:


The current usage is probably not incorrect, because the implication you 
mentioned makes sense. However, it is inconsistent. I also don't know of any 
other inferred permission like this one. That's the reason I raised the issue. 
Unless there is a big push back, I would like to take the KIP approach and fix 
this inconsistency by dropping the {{Describe(Cluster)}} check from the API and 
introducing a {{Describe(Group)}} group requirement. If there is push back, we 
can do the latter only and implement what you suggested above. If you are okay 
with this approach I'll start drafting the KIP.

> Inconsistency in consumer group related ACLs
> 
>
> Key: KAFKA-5638
> URL: https://issues.apache.org/jira/browse/KAFKA-5638
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.11.0.0
>Reporter: Vahid Hashemian
>Assignee: Vahid Hashemian
>Priority: Minor
>  Labels: needs-kip
>
> Users can see all groups in the cluster (using consumer group’s {{--list}} 
> option) provided that they have {{Describe}} access to the cluster. It would 
> make more sense to modify that experience and limit what is listed in the 
> output to only those groups they have {{Describe}} access to. The reason is, 
> almost everything else is accessible by a user only if the access is 
> specifically granted (through ACL {{--add}}); and this scenario should not be 
> an exception. The potential change would be updating the minimum required 
> permission of {{ListGroup}} from {{Describe (Cluster)}} to {{Describe 
> (Group)}}.
> We can also look at this issue from a different angle: A user with {{Read}} 
> access to a group can describe the group, but the same user would not see 
> anything when listing groups (assuming there is no {{Describe}} access to the 
> cluster). It makes more sense for this user to be able to list all groups 
> s/he can already describe.
> It would be great to know if any user is relying on the existing behavior 
> (listing all consumer groups using a {{Describe (Cluster)}} ACL).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (KAFKA-5638) Inconsistency in consumer group related ACLs

2017-08-01 Thread Vahid Hashemian (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16109479#comment-16109479
 ] 

Vahid Hashemian edited comment on KAFKA-5638 at 8/1/17 6:38 PM:


[~hachikuji] That should work too, and give us backward compatibility (is this 
why we would reject my suggested substitution?). I'm not sure why the required 
ACL was set to {{Describe(Cluster)}} in the first place. With your suggested 
extension, I think the inconsistency would still be there, so in the long run 
it would make sense to get rid of the required cluster-level ACL (unless there 
is sound logic behind it).

I assume extending the API would still require a KIP.


was (Author: vahid):
[~hachikuji] That should work too, and give us backward compatibility (is this 
why we would reject my suggested substitution). I'm not sure why the required 
ACL was set to {{Describe(Cluster)}} in the first place. With your suggested 
extension, I think the inconsistency would still be there, so in the long run 
it would make sense to get rid of the required cluster-level ACL (unless there 
is sound logic behind it).


> Inconsistency in consumer group related ACLs
> 
>
> Key: KAFKA-5638
> URL: https://issues.apache.org/jira/browse/KAFKA-5638
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.11.0.0
>Reporter: Vahid Hashemian
>Assignee: Vahid Hashemian
>Priority: Minor
>  Labels: needs-kip
>
> Users can see all groups in the cluster (using consumer group’s {{--list}} 
> option) provided that they have {{Describe}} access to the cluster. It would 
> make more sense to modify that experience and limit what is listed in the 
> output to only those groups they have {{Describe}} access to. The reason is, 
> almost everything else is accessible by a user only if the access is 
> specifically granted (through ACL {{--add}}); and this scenario should not be 
> an exception. The potential change would be updating the minimum required 
> permission of {{ListGroup}} from {{Describe (Cluster)}} to {{Describe 
> (Group)}}.
> We can also look at this issue from a different angle: A user with {{Read}} 
> access to a group can describe the group, but the same user would not see 
> anything when listing groups (assuming there is no {{Describe}} access to the 
> cluster). It makes more sense for this user to be able to list all groups 
> s/he can already describe.
> It would be great to know if any user is relying on the existing behavior 
> (listing all consumer groups using a {{Describe (Cluster)}} ACL).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (KAFKA-5664) Disable auto offset commit in ConsoleConsumer if no group is provided

2017-07-26 Thread Vahid Hashemian (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian reassigned KAFKA-5664:
--

Assignee: Vahid Hashemian

> Disable auto offset commit in ConsoleConsumer if no group is provided
> -
>
> Key: KAFKA-5664
> URL: https://issues.apache.org/jira/browse/KAFKA-5664
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jason Gustafson
>Assignee: Vahid Hashemian
>
> In ConsoleConsumer, if no group is provided, we generate a random groupId:
> {code}
> consumerProps.put(ConsumerConfig.GROUP_ID_CONFIG, s"console-consumer-${new 
> Random().nextInt(100000)}")
> {code}
> In this case, since the group is not likely to be used again, we should 
> disable automatic offset commits. This avoids polluting the coordinator cache 
> with offsets that will never be used.
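A minimal sketch of the proposed behavior ({{groupIdPassed}} is a hypothetical 
flag; the config keys are the standard new-consumer ones):

{code}
if (!groupIdPassed) {
  // No --group given: use a throwaway group id and skip committing offsets,
  // since nothing will ever resume from this group.
  consumerProps.put(ConsumerConfig.GROUP_ID_CONFIG,
    s"console-consumer-${new Random().nextInt(100000)}")
  consumerProps.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false")
}
{code}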



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (KAFKA-5639) Enhance DescribeGroups protocol to include additional group information

2017-07-25 Thread Vahid Hashemian (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-5639:
---
Description: 
The 
[{{DescribeGroups}}|https://kafka.apache.org/protocol#The_Messages_DescribeGroups]
 protocol v1 currently returns this information for each consumer group:
* {{error_code}}
* {{group_id}}
* {{state}}
* {{protocol_type}}
* {{protocol}}
* {{members}}

There are additional fields in a {{GroupMetadata}} object on the server side, 
some of which could be useful if exposed via the {{DescribeGroups}} API. Here 
are some examples:
* {{generationId}}
* {{leaderId}}
* {{numOffsets}}
* {{hasOffsets}}

Enhancing the protocol with this additional info means improving the existing 
tools that make use of it. For example, using this additional info, the 
consumer group command's {{\-\-describe}} output will provide more information 
about each consumer group to help with its monitoring / troubleshooting.

  was:
The 
[{{DescribeGroups}}|https://kafka.apache.org/protocol#The_Messages_DescribeGroups]
 API v1 currently returns this information for each consumer group:
* {{error_code}}
* {{group_id}}
* {{state}}
* {{protocol_type}}
* {{protocol}}
* {{members}}

There are additional fields in a {{GroupMetadata}} object on the server side, 
some of which could be useful if exposed via the {{DescribeGroups}} API. Here 
are some examples:
* {{generationId}}
* {{leaderId}}
* {{numOffsets}}
* {{hasOffsets}}

Enhancing the API with this additional info means improving the existing tools 
that make use of the API. For example, using this additional info, the consumer 
group command's {{\-\-describe}} output will provide more information about 
each consumer group to help with its monitoring / troubleshooting.

Summary: Enhance DescribeGroups protocol to include additional group 
information  (was: Enhance DescribeGroups API to return additional group 
information)
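Purely as an illustration of the idea (the values below are made up, and the 
exact field set would be settled in the KIP), an enriched describe result 
might carry:

{code}
error_code: 0
group_id: my-group
state: Stable
protocol_type: consumer
protocol: range
generation_id: 3
leader_id: consumer-1-9c8f0f4b
num_offsets: 42
{code}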

> Enhance DescribeGroups protocol to include additional group information
> ---
>
> Key: KAFKA-5639
> URL: https://issues.apache.org/jira/browse/KAFKA-5639
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Reporter: Vahid Hashemian
>Assignee: Vahid Hashemian
>Priority: Minor
>  Labels: needs-kip
>
> The 
> [{{DescribeGroups}}|https://kafka.apache.org/protocol#The_Messages_DescribeGroups]
>  protocol v1 currently returns this information for each consumer group:
> * {{error_code}}
> * {{group_id}}
> * {{state}}
> * {{protocol_type}}
> * {{protocol}}
> * {{members}}
> There are additional fields in a {{GroupMetadata}} object on the server side, 
> some of which could be useful if exposed via the {{DescribeGroups}} API. Here 
> are some examples:
> * {{generationId}}
> * {{leaderId}}
> * {{numOffsets}}
> * {{hasOffsets}}
> Enhancing the protocol with this additional info means improving the existing 
> tools that make use of it. For example, using this additional info, the 
> consumer group command's {{\-\-describe}} output will provide more 
> information about each consumer group to help with its monitoring / 
> troubleshooting.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (KAFKA-3356) Remove ConsumerOffsetChecker, deprecated in 0.9, in 0.11

2017-07-20 Thread Vahid Hashemian (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16078812#comment-16078812
 ] 

Vahid Hashemian edited comment on KAFKA-3356 at 7/20/17 10:25 PM:
--

[~ijuma] [~hachikuji] If {{ConsumerGroupCommand}} needs to provide all existing 
{{ConsumerOffsetChecker}} features I could start a KIP (assuming one is 
required) to cover what [~mimaison] reported as missing:
* Listing all brokers
* Filtering results by a given topic list


was (Author: vahid):
[~ijuma] If {{ConsumerGroupCommand}} needs to provide all existing 
{{ConsumerOffsetChecker}} features I could start a KIP (assuming one is 
required) to cover what [~mimaison] reported as missing:
* Listing all brokers
* Filtering results by a given topic list

> Remove ConsumerOffsetChecker, deprecated in 0.9, in 0.11
> 
>
> Key: KAFKA-3356
> URL: https://issues.apache.org/jira/browse/KAFKA-3356
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.10.2.0
>Reporter: Ashish Singh
>Assignee: Mickael Maison
>Priority: Blocker
> Fix For: 0.12.0.0
>
>
> ConsumerOffsetChecker is marked deprecated as of 0.9, should be removed in 
> 0.11.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5588) ConsoleConsumer: useless --new-consumer option

2017-07-13 Thread Vahid Hashemian (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086040#comment-16086040
 ] 

Vahid Hashemian commented on KAFKA-5588:


If I'm not mistaken, this requires a KIP due to its potential impact on 
existing users.

> ConsoleConsumer: useless --new-consumer option
> --
>
> Key: KAFKA-5588
> URL: https://issues.apache.org/jira/browse/KAFKA-5588
> Project: Kafka
>  Issue Type: Bug
>Reporter: Paolo Patierno
>Assignee: Paolo Patierno
>Priority: Minor
>
> Hi,
> it seems to me that the --new-consumer option on the ConsoleConsumer is 
> useless.
> The useOldConsumer var is set when --zookeeper is specified on the command 
> line, in which case the --bootstrap-server option (and --new-consumer) can't 
> be used.
> If you use the --bootstrap-server option then the new consumer is used 
> automatically, so there is no need for --new-consumer.
> It turns out that choosing the old or new consumer depends solely on using 
> the --zookeeper or --bootstrap-server option (they can't be used together, 
> so I can't use the new consumer connecting to ZooKeeper).
> It's also clear when you use --zookeeper for the old consumer; the help 
> output says:
> "Consider using the new consumer by passing [bootstrap-server] instead of 
> [zookeeper]"
> I'm going to remove the --new-consumer option from the tool.
> Thanks,
> Paolo.
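To make the current behavior concrete (topic and addresses are placeholders):

{code}
# New consumer: selected automatically by --bootstrap-server
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test

# Old consumer: selected by --zookeeper
bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test
{code}

The {{--new-consumer}} flag adds nothing on top of this, which is the 
redundancy being removed.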



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5526) KIP-175: ConsumerGroupCommand no longer shows output for consumer groups which have not committed offsets

2017-07-12 Thread Vahid Hashemian (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16084533#comment-16084533
 ] 

Vahid Hashemian commented on KAFKA-5526:


[~hachikuji] I'd appreciate it if you could take a look at the 
[KIP|https://cwiki.apache.org/confluence/display/KAFKA/KIP-175%3A+Additional+%27--describe%27+views+for+ConsumerGroupCommand]
 and share your opinion. Thanks.

> KIP-175: ConsumerGroupCommand no longer shows output for consumer groups 
> which have not committed offsets
> -
>
> Key: KAFKA-5526
> URL: https://issues.apache.org/jira/browse/KAFKA-5526
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ryan P
>Assignee: Vahid Hashemian
>  Labels: kip
>
> It would appear that the latest iteration of the ConsumerGroupCommand no 
> longer outputs information about group membership when no offsets have been 
> committed. It would be nice if the output generated by these tools maintained 
> some form of consistency across versions as some users have grown to depend 
> on them. 
> 0.9.x output:
> bin/kafka-consumer-groups --bootstrap-server localhost:9092 --new-consumer 
> --describe --group console-consumer-34885
> GROUP, TOPIC, PARTITION, CURRENT OFFSET, LOG END OFFSET, LAG, OWNER
> console-consumer-34885, test, 0, unknown, 0, unknown, consumer-1_/192.168.1.64
> 0.10.2 output:
> bin/kafka-consumer-groups --bootstrap-server localhost:9092 --new-consumer 
> --describe --group console-consumer-34885
> Note: This will only show information about consumers that use the Java 
> consumer API (non-ZooKeeper-based consumers).
> TOPIC  PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG  CONSUMER-ID  HOST  CLIENT-ID



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-4931) stop script fails due 4096 ps output limit

2017-07-10 Thread Vahid Hashemian (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16080885#comment-16080885
 ] 

Vahid Hashemian commented on KAFKA-4931:


[~tombentley] What exactly has the 4096 character limit? Is it the output of 
{{ps ax}}? Or something else?
For me on an Ubuntu machine, the output of {{ps ax}} is over 50K characters 
long, and the stop script works fine.

> stop script fails due 4096 ps output limit
> --
>
> Key: KAFKA-4931
> URL: https://issues.apache.org/jira/browse/KAFKA-4931
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.10.2.0
>Reporter: Amit Jain
>Assignee: Tom Bentley
>Priority: Minor
>  Labels: patch-available
>
> When run, the script bin/zookeeper-server-stop.sh fails to stop the ZooKeeper 
> server process if the ps output exceeds the 4096 character limit of Linux. I 
> think that instead of ps we can use ${JAVA_HOME}/bin/jps -vl | grep 
> QuorumPeerMain, which would correctly stop the ZooKeeper process. Currently 
> we find the PIDs to kill with PIDS=$(ps ax | grep java | grep -i 
> QuorumPeerMain | grep -v grep | awk '{print $1}')
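A sketch of the suggested jps-based lookup (assuming JAVA_HOME is set; jps 
prints the full main class, so it is not subject to the truncated ps output):

{code}
PIDS=$("$JAVA_HOME/bin/jps" -vl | grep -i QuorumPeerMain | awk '{print $1}')
if [ -n "$PIDS" ]; then
  kill -s TERM $PIDS  # the stock script sends SIGINT; TERM shown as an example
else
  echo "No zookeeper server to stop"
fi
{code}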



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (KAFKA-4931) stop script fails due 4096 ps output limit

2017-07-10 Thread Vahid Hashemian (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16080885#comment-16080885
 ] 

Vahid Hashemian edited comment on KAFKA-4931 at 7/10/17 7:01 PM:
-

[~tombentley] What exactly has the 4096 character limit? Is it the output of 
{{ps ax}}? Or something else?
For me on an Ubuntu machine, the output of {{ps ax}} is over 50K characters 
long, and the stop script works fine.
Could you please clarify? Thanks.


was (Author: vahid):
[~tombentley] What exactly has the 4096 character limit? Is it the output of 
{{ps ax}}? Or something else?
For me on an Ubuntu machine, the output of {{ps ax}} is over 50K characters 
long, and the stop script works fine.

> stop script fails due 4096 ps output limit
> --
>
> Key: KAFKA-4931
> URL: https://issues.apache.org/jira/browse/KAFKA-4931
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.10.2.0
>Reporter: Amit Jain
>Assignee: Tom Bentley
>Priority: Minor
>  Labels: patch-available
>
> When run, the script bin/zookeeper-server-stop.sh fails to stop the ZooKeeper 
> server process if the ps output exceeds the 4096 character limit of Linux. I 
> think that instead of ps we can use ${JAVA_HOME}/bin/jps -vl | grep 
> QuorumPeerMain, which would correctly stop the ZooKeeper process. Currently 
> we find the PIDs to kill with PIDS=$(ps ax | grep java | grep -i 
> QuorumPeerMain | grep -v grep | awk '{print $1}')



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (KAFKA-3356) Remove ConsumerOffsetChecker, deprecated in 0.9, in 0.11

2017-07-07 Thread Vahid Hashemian (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16078812#comment-16078812
 ] 

Vahid Hashemian edited comment on KAFKA-3356 at 7/7/17 11:03 PM:
-

[~ijuma] If {{ConsumerGroupCommand}} needs to provide all existing 
{{ConsumerOffsetChecker}} features I could start a KIP (assuming one is 
required) to cover what [~mimaison] reported as missing:
* Listing all brokers
* Filtering results by a given topic list


was (Author: vahid):
[~ijuma] If {{ConsumerGroupCommand}} needs to provide all the existing 
{{ConsumerOffsetChecker}} properties I could start a KIP (assuming one is 
required) to cover what [~mimaison] reported as missing:
* Listing all brokers
* Filtering results by a given topic list

> Remove ConsumerOffsetChecker, deprecated in 0.9, in 0.11
> 
>
> Key: KAFKA-3356
> URL: https://issues.apache.org/jira/browse/KAFKA-3356
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.10.2.0
>Reporter: Ashish Singh
>Assignee: Mickael Maison
>Priority: Blocker
> Fix For: 0.12.0.0
>
>
> ConsumerOffsetChecker is marked deprecated as of 0.9, should be removed in 
> 0.11.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-3356) Remove ConsumerOffsetChecker, deprecated in 0.9, in 0.11

2017-07-07 Thread Vahid Hashemian (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16078812#comment-16078812
 ] 

Vahid Hashemian commented on KAFKA-3356:


[~ijuma] If {{ConsumerGroupCommand}} needs to provide all the existing 
{{ConsumerOffsetChecker}} properties I could start a KIP (assuming one is 
required) to cover what [~mimaison] reported as missing:
* Listing all brokers
* Filtering results by a given topic list

> Remove ConsumerOffsetChecker, deprecated in 0.9, in 0.11
> 
>
> Key: KAFKA-3356
> URL: https://issues.apache.org/jira/browse/KAFKA-3356
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.10.2.0
>Reporter: Ashish Singh
>Assignee: Mickael Maison
>Priority: Blocker
> Fix For: 0.12.0.0
>
>
> ConsumerOffsetChecker is marked deprecated as of 0.9, should be removed in 
> 0.11.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (KAFKA-5526) KIP-175: ConsumerGroupCommand no longer shows output for consumer groups which have not committed offsets

2017-07-04 Thread Vahid Hashemian (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-5526:
---
Summary: KIP-175: ConsumerGroupCommand no longer shows output for consumer 
groups which have not committed offsets  (was: ConsumerGroupCommand no longer 
shows output for consumer groups which have not committed offsets)

> KIP-175: ConsumerGroupCommand no longer shows output for consumer groups 
> which have not committed offsets
> -
>
> Key: KAFKA-5526
> URL: https://issues.apache.org/jira/browse/KAFKA-5526
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ryan P
>Assignee: Vahid Hashemian
>  Labels: kip
>
> It would appear that the latest iteration of the ConsumerGroupCommand no 
> longer outputs information about group membership when no offsets have been 
> committed. It would be nice if the output generated by these tools maintained 
> some form of consistency across versions as some users have grown to depend 
> on them. 
> 0.9.x output:
> bin/kafka-consumer-groups --bootstrap-server localhost:9092 --new-consumer 
> --describe --group console-consumer-34885
> GROUP, TOPIC, PARTITION, CURRENT OFFSET, LOG END OFFSET, LAG, OWNER
> console-consumer-34885, test, 0, unknown, 0, unknown, consumer-1_/192.168.1.64
> 0.10.2 output:
> bin/kafka-consumer-groups --bootstrap-server localhost:9092 --new-consumer 
> --describe --group console-consumer-34885
> Note: This will only show information about consumers that use the Java 
> consumer API (non-ZooKeeper-based consumers).
> TOPIC  PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG  CONSUMER-ID  HOST  CLIENT-ID



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (KAFKA-3465) kafka.tools.ConsumerOffsetChecker won't align with kafka New Consumer mode

2017-07-01 Thread Vahid Hashemian (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-3465:
---
Affects Version/s: 0.11.0.0
 Priority: Minor  (was: Major)
Fix Version/s: 0.11.1.0

> kafka.tools.ConsumerOffsetChecker won't align with kafka New Consumer mode
> --
>
> Key: KAFKA-3465
> URL: https://issues.apache.org/jira/browse/KAFKA-3465
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 0.9.0.0, 0.11.0.0
>Reporter: BrianLing
>Assignee: Vahid Hashemian
>Priority: Minor
> Fix For: 0.11.1.0
>
>
> 1. When we enable MirrorMaker to migrate Kafka events from one cluster to 
> another with "new.consumer" mode:
> java -Xmx2G -server -XX:+UseG1GC -XX:MaxGCPauseMillis=20 
> -XX:InitiatingHeapOccupancyPercent=35 -XX:+DisableExplicitGC 
> -Djava.awt.headless=true -Dcom.sun.management.jmxremote 
> -Dcom.sun.management.jmxremote.authenticate=false 
> -Dcom.sun.management.jmxremote.ssl=false 
> -Dkafka.logs.dir=/kafka/kafka-app-logs 
> -Dlog4j.configuration=file:/kafka/kafka_2.10-0.9.0.0/bin/../config/tools-log4j.properties
>  -cp :/kafka/kafka_2.10-0.9.0.0/bin/../libs/* 
> -Dkafka.logs.filename=lvs-slca-mm.log kafka.tools.MirrorMaker lvs-slca-mm.log 
> --consumer.config ../config/consumer.properties --new.consumer --num.streams 
> 4 --producer.config ../config/producer-slca.properties --whitelist risk.*
> 2. When we use the ConsumerOffsetChecker tool, notice the lag won't change 
> and the owner is none.
> bin/kafka-run-class.sh  kafka.tools.ConsumerOffsetChecker --broker-info 
> --group lvs.slca.mirrormaker --zookeeper lvsdmetlvm01.lvs.paypal.com:2181 
> --topic 
> Group                 Topic  Pid  Offset       logSize      Lag     Owner
> lvs.slca.mirrormaker          0    418578332    418678347    100015  none
> lvs.slca.mirrormaker          1    418598026    418698338    100312  none
> [Root Cause]
> I think it's due to the 0.9.0 feature that moved offset & consumer owner 
> storage from the ZooKeeper dependency to Kafka internals. 
> Does it mean we cannot use the below command to check the new consumer's 
> lag, given that the current lag formula is lag = logSize – offset?
> https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/tools/ConsumerOffsetChecker.scala#L80
> https://github.com/apache/kafka/blob/0.9.0/core/src/main/scala/kafka/tools/ConsumerOffsetChecker.scala#L174-L182
>  => the offset fetch happens from ZooKeeper instead of from Kafka
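For reference, offsets committed by the new consumer live in Kafka rather than 
ZooKeeper, so they can be inspected with the consumer groups tool instead (the 
bootstrap address is a placeholder):

{code}
# --new-consumer may additionally be required on pre-0.11 releases
bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
  --describe --group lvs.slca.mirrormaker
{code}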



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5526) ConsumerGroupCommand no longer shows output for consumer groups which have not committed offsets

2017-06-28 Thread Vahid Hashemian (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16067469#comment-16067469
 ] 

Vahid Hashemian commented on KAFKA-5526:


[~hachikuji] Sounds good. I'll start drafting a KIP then.

> ConsumerGroupCommand no longer shows output for consumer groups which have 
> not committed offsets
> 
>
> Key: KAFKA-5526
> URL: https://issues.apache.org/jira/browse/KAFKA-5526
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ryan P
>Assignee: Vahid Hashemian
>
> It would appear that the latest iteration of the ConsumerGroupCommand no 
> longer outputs information about group membership when no offsets have been 
> committed. It would be nice if the output generated by these tools maintained 
> some form of consistency across versions as some users have grown to depend 
> on them. 
> 0.9.x output:
> bin/kafka-consumer-groups --bootstrap-server localhost:9092 --new-consumer 
> --describe --group console-consumer-34885
> GROUP, TOPIC, PARTITION, CURRENT OFFSET, LOG END OFFSET, LAG, OWNER
> console-consumer-34885, test, 0, unknown, 0, unknown, consumer-1_/192.168.1.64
> 0.10.2 output:
> bin/kafka-consumer-groups --bootstrap-server localhost:9092 --new-consumer 
> --describe --group console-consumer-34885
> Note: This will only show information about consumers that use the Java 
> consumer API (non-ZooKeeper-based consumers).
> TOPIC  PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG  CONSUMER-ID  HOST  CLIENT-ID



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5526) ConsumerGroupCommand no longer shows output for consumer groups which have not committed offsets

2017-06-27 Thread Vahid Hashemian (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16065457#comment-16065457
 ] 

Vahid Hashemian commented on KAFKA-5526:


[~hachikuji] Yeah, that's doable. I assume in this case `--members` would be a 
sub-option of `--describe`.
How do you think we should modify the behavior of `--describe` for the use case 
reported above? Maybe I'm missing something, because even with `--members` it 
seems we could still run into the issue reported here.
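If that direction is taken, a hypothetical invocation (the option name is still 
subject to the KIP discussion) could be:

{code}
bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
  --describe --members --group console-consumer-34885
{code}

which would list membership even when the group has committed no offsets.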

> ConsumerGroupCommand no longer shows output for consumer groups which have 
> not committed offsets
> 
>
> Key: KAFKA-5526
> URL: https://issues.apache.org/jira/browse/KAFKA-5526
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ryan P
>Assignee: Vahid Hashemian
>
> It would appear that the latest iteration of the ConsumerGroupCommand no 
> longer outputs information about group membership when no offsets have been 
> committed. It would be nice if the output generated by these tools maintained 
> some form of consistency across versions as some users have grown to depend 
> on them. 
> 0.9.x output:
> bin/kafka-consumer-groups --bootstrap-server localhost:9092 --new-consumer 
> --describe --group console-consumer-34885
> GROUP, TOPIC, PARTITION, CURRENT OFFSET, LOG END OFFSET, LAG, OWNER
> console-consumer-34885, test, 0, unknown, 0, unknown, consumer-1_/192.168.1.64
> 0.10.2 output:
> bin/kafka-consumer-groups --bootstrap-server localhost:9092 --new-consumer 
> --describe --group console-consumer-34885
> Note: This will only show information about consumers that use the Java 
> consumer API (non-ZooKeeper-based consumers).
> TOPIC  PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG  CONSUMER-ID  HOST  CLIENT-ID



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (KAFKA-5526) ConsumerGroupCommand no longer shows output for consumer groups which have not committed offsets

2017-06-27 Thread Vahid Hashemian (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian reassigned KAFKA-5526:
--

Assignee: Vahid Hashemian

> ConsumerGroupCommand no longer shows output for consumer groups which have 
> not committed offsets
> 
>
> Key: KAFKA-5526
> URL: https://issues.apache.org/jira/browse/KAFKA-5526
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ryan P
>Assignee: Vahid Hashemian
>
> It would appear that the latest iteration of the ConsumerGroupCommand no 
> longer outputs information about group membership when no offsets have been 
> committed. It would be nice if the output generated by these tools maintained 
> some form of consistency across versions as some users have grown to depend 
> on them. 
> 0.9.x output:
> bin/kafka-consumer-groups --bootstrap-server localhost:9092 --new-consumer 
> --describe --group console-consumer-34885
> GROUP, TOPIC, PARTITION, CURRENT OFFSET, LOG END OFFSET, LAG, OWNER
> console-consumer-34885, test, 0, unknown, 0, unknown, consumer-1_/192.168.1.64
> 0.10.2 output:
> bin/kafka-consumer-groups --bootstrap-server localhost:9092 --new-consumer 
> --describe --group console-consumer-34885
> Note: This will only show information about consumers that use the Java 
> consumer API (non-ZooKeeper-based consumers).
> TOPIC  PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG  CONSUMER-ID  HOST  CLIENT-ID



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-3465) kafka.tools.ConsumerOffsetChecker won't align with kafka New Consumer mode

2017-06-26 Thread Vahid Hashemian (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16063618#comment-16063618
 ] 

Vahid Hashemian commented on KAFKA-3465:


[~cmccabe] I have opened a PR to update the relevant documentation 
[here|https://github.com/apache/kafka/pull/3405]. Would that address your 
concern? Thanks. 

> kafka.tools.ConsumerOffsetChecker won't align with kafka New Consumer mode
> --
>
> Key: KAFKA-3465
> URL: https://issues.apache.org/jira/browse/KAFKA-3465
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 0.9.0.0
>Reporter: BrianLing
>
> 1. When we enable MirrorMaker to migrate Kafka events from one cluster to 
> another with "new.consumer" mode:
> java -Xmx2G -server -XX:+UseG1GC -XX:MaxGCPauseMillis=20 
> -XX:InitiatingHeapOccupancyPercent=35 -XX:+DisableExplicitGC 
> -Djava.awt.headless=true -Dcom.sun.management.jmxremote 
> -Dcom.sun.management.jmxremote.authenticate=false 
> -Dcom.sun.management.jmxremote.ssl=false 
> -Dkafka.logs.dir=/kafka/kafka-app-logs 
> -Dlog4j.configuration=file:/kafka/kafka_2.10-0.9.0.0/bin/../config/tools-log4j.properties
>  -cp :/kafka/kafka_2.10-0.9.0.0/bin/../libs/* 
> -Dkafka.logs.filename=lvs-slca-mm.log kafka.tools.MirrorMaker lvs-slca-mm.log 
> --consumer.config ../config/consumer.properties --new.consumer --num.streams 
> 4 --producer.config ../config/producer-slca.properties --whitelist risk.*
> 2. When we use the ConsumerOffsetChecker tool, notice the lag won't change 
> and the owner is none.
> bin/kafka-run-class.sh  kafka.tools.ConsumerOffsetChecker --broker-info 
> --group lvs.slca.mirrormaker --zookeeper lvsdmetlvm01.lvs.paypal.com:2181 
> --topic 
> Group                 Topic  Pid  Offset       logSize      Lag     Owner
> lvs.slca.mirrormaker          0    418578332    418678347    100015  none
> lvs.slca.mirrormaker          1    418598026    418698338    100312  none
> [Root Cause]
> I think it's due to the 0.9.0 feature that moved offset & consumer owner 
> storage from the ZooKeeper dependency to Kafka internals. 
> Does it mean we cannot use the below command to check the new consumer's 
> lag, given that the current lag formula is lag = logSize – offset?
> https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/tools/ConsumerOffsetChecker.scala#L80
> https://github.com/apache/kafka/blob/0.9.0/core/src/main/scala/kafka/tools/ConsumerOffsetChecker.scala#L174-L182
>  => the offset fetch happens from ZooKeeper instead of from Kafka



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (KAFKA-4585) KIP-163: Offset fetch and commit requests use the same permissions

2017-06-22 Thread Vahid Hashemian (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-4585:
---
Summary: KIP-163: Offset fetch and commit requests use the same permissions 
 (was: Offset fetch and commit requests use the same permissions)

> KIP-163: Offset fetch and commit requests use the same permissions
> --
>
> Key: KAFKA-4585
> URL: https://issues.apache.org/jira/browse/KAFKA-4585
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.10.1.1
>Reporter: Ewen Cheslack-Postava
>Assignee: Vahid Hashemian
>  Labels: kip
>
> Currently the handling of permissions for consumer groups seems a bit odd 
> because most of the requests use the Read permission on the Group (join, 
> sync, heartbeat, leave, offset commit, and offset fetch). This means you 
> cannot lock down certain functionality for certain users. For this JIRA I'll 
> highlight a realistic example, since conflating the ability to perform most 
> of these operations may not by itself be a serious problem.
> In particular, if you want tooling for monitoring offsets (i.e. you want to 
> be able to read from all groups) but don't want that tool to be able to write 
> offsets, you currently cannot achieve this. Part of the reason this seems odd 
> to me is that any operation which can mutate state seems like it should be a 
> Write operation (i.e. joining, syncing, leaving, and committing; maybe 
> heartbeat as well). However, [~hachikuji] has mentioned that the use of Read 
> may have been intentional. If that is the case, changing at least offset 
> fetch to be a Describe operation instead would allow isolating the mutating 
> vs non-mutating request types.
> Note that this would require a KIP and would potentially have some 
> compatibility implications. Note however, that if we went with the Describe 
> option, Describe is allowed by default when Read, Write, or Delete are 
> allowed, so this may not have to have any compatibility issues (if the user 
> previously allowed Read, they'd still have all the same capabilities as 
> before).
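One note on the compatibility argument, sketched with placeholder names: since 
Describe is implied by Read, Write, or Delete, a consumer principal that 
already has

{code}
bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 \
  --add --allow-principal User:consumer \
  --operation Read \
  --group my-group
{code}

would keep working if offset fetch moved to Describe, while a monitoring-only 
principal could be granted just Describe on the group and thus read offsets 
without being able to commit them.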



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-4201) Add an --assignment-strategy option to new-consumer-based Mirror Maker

2017-06-19 Thread Vahid Hashemian (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16054915#comment-16054915
 ] 

Vahid Hashemian commented on KAFKA-4201:


[~johnma] It seems to me that the focus of 
[KAFKA-2111|https://issues.apache.org/jira/browse/KAFKA-2111] is on existing 
arguments of Kafka tools. [This 
JIRA|https://issues.apache.org/jira/browse/KAFKA-4201] however introduces a new 
argument. I would suggest handling existing and approved arguments in 
[KAFKA-2111|https://issues.apache.org/jira/browse/KAFKA-2111]. If necessary, 
the [existing PR for this JIRA|https://github.com/apache/kafka/pull/1912] can 
later be rebased to comply with any standardization introduced in 
[KAFKA-2111|https://issues.apache.org/jira/browse/KAFKA-2111]. I hope it makes 
sense. Please advise if you disagree. Thanks. 

> Add an --assignment-strategy option to new-consumer-based Mirror Maker
> --
>
> Key: KAFKA-4201
> URL: https://issues.apache.org/jira/browse/KAFKA-4201
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Vahid Hashemian
>Assignee: Vahid Hashemian
> Fix For: 0.11.1.0
>
>
> The default assignment strategy in mirror maker will be changed from range to 
> round robin in an upcoming release ([see 
> KAFKA-3818|https://issues.apache.org/jira/browse/KAFKA-3818]). In order to 
> make it easier for users to change the assignment strategy, add an 
> {{--assignment-strategy}} option to the Mirror Maker command line tool.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (KAFKA-3438) Rack Aware Replica Reassignment should warn of overloaded brokers

2017-06-16 Thread Vahid Hashemian (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-3438:
---
Fix Version/s: 0.11.1.0

> Rack Aware Replica Reassignment should warn of overloaded brokers
> -
>
> Key: KAFKA-3438
> URL: https://issues.apache.org/jira/browse/KAFKA-3438
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.10.0.0
>Reporter: Ben Stopford
>Assignee: Vahid Hashemian
> Fix For: 0.11.1.0
>
>
> We've changed the replica reassignment code to be rack aware.
> One problem that might catch users out would be that they rebalance the 
> cluster using kafka-reassign-partitions.sh but their rack configuration means 
> that some high proportion of replicas are pushed onto a single, or small 
> number of, brokers. 
> This should be an easy problem to avoid, by changing the rack assignment 
> information, but we should probably warn users if they are going to create 
> something that is unbalanced. 
> So imagine I have a Kafka cluster of 12 nodes spread over two racks with rack 
> awareness enabled. If I add a 13th machine, on a new rack, and run the 
> rebalance tool, that new machine will get ~6x as many replicas as the least 
> loaded broker (each rack receives roughly a third of all replicas, so the 
> lone broker on the new rack holds ~1/3 while each of the six brokers on an 
> existing rack holds ~1/18). 
> Suggest a warning be added to the tool output when --generate is called: 
> "The most loaded broker has 2.3x as many replicas as the least loaded 
> broker. This is likely due to an uneven distribution of brokers across racks. 
> You're advised to alter the rack config so there are approximately the same 
> number of brokers per rack" and displays the individual rack→#brokers and 
> broker→#replicas data for the proposed move.  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (KAFKA-4126) No relevant log when the topic is non-existent

2017-06-16 Thread Vahid Hashemian (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-4126:
---
Fix Version/s: 0.11.1.0

> No relevant log when the topic is non-existent
> --
>
> Key: KAFKA-4126
> URL: https://issues.apache.org/jira/browse/KAFKA-4126
> Project: Kafka
>  Issue Type: Bug
>Reporter: Balázs Barnabás
>Assignee: Vahid Hashemian
>Priority: Minor
> Fix For: 0.11.1.0
>
>
> When a producer sends a ProducerRecord into a Kafka topic that doesn't 
> exist, there is no relevant debug/error log that points out the error.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (KAFKA-3999) Consumer bytes-fetched metric uses decompressed message size

2017-06-16 Thread Vahid Hashemian (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-3999:
---
Fix Version/s: 0.11.1.0

> Consumer bytes-fetched metric uses decompressed message size
> 
>
> Key: KAFKA-3999
> URL: https://issues.apache.org/jira/browse/KAFKA-3999
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.9.0.1, 0.10.0.0
>Reporter: Jason Gustafson
>Assignee: Vahid Hashemian
>Priority: Minor
> Fix For: 0.11.1.0
>
>
> It looks like the computation for the bytes-fetched metrics uses the size of 
> the decompressed message set. I would have expected it to be based on the 
> raw size of the fetch responses. Perhaps it would be helpful to expose both 
> the raw and decompressed fetch sizes? 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (KAFKA-4201) Add an --assignment-strategy option to new-consumer-based Mirror Maker

2017-06-16 Thread Vahid Hashemian (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-4201:
---
Fix Version/s: 0.11.1.0

> Add an --assignment-strategy option to new-consumer-based Mirror Maker
> --
>
> Key: KAFKA-4201
> URL: https://issues.apache.org/jira/browse/KAFKA-4201
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Vahid Hashemian
>Assignee: Vahid Hashemian
> Fix For: 0.11.1.0
>
>
> The default assignment strategy in mirror maker will be changed from range to 
> round robin in an upcoming release ([see 
> KAFKA-3818|https://issues.apache.org/jira/browse/KAFKA-3818]). In order to 
> make it easier for users to change the assignment strategy, add an 
> {{--assignment-strategy}} option to the Mirror Maker command line tool.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (KAFKA-4108) Improve DumpLogSegments offsets-decoder output format

2017-06-16 Thread Vahid Hashemian (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-4108:
---
Fix Version/s: 0.11.1.0

> Improve DumpLogSegments offsets-decoder output format
> -
>
> Key: KAFKA-4108
> URL: https://issues.apache.org/jira/browse/KAFKA-4108
> Project: Kafka
>  Issue Type: Improvement
>  Components: tools
>Reporter: Jason Gustafson
>Assignee: Vahid Hashemian
> Fix For: 0.11.1.0
>
>
> When using the DumpLogSegments with the "--offsets-decoder" option (for 
> consuming __consumer_offsets), the encoding of group metadata makes it a 
> little difficult to identify individual fields. In particular, we use the 
> following formatted string for group metadata: 
> {code}
> ${protocolType}:${groupMetadata.protocol}:${groupMetadata.generationId}:${assignment}
> {code}
> Keys have similar formatting. Most users are probably not going to know 
> which field is which based only on the output, so it would be helpful to 
> include field names. Maybe we could just output a JSON object?
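Hypothetically (the field names mirror the format string above; the values are 
invented), a JSON rendering would be self-describing:

{code}
{"protocolType": "consumer", "protocol": "range", "generationId": 3,
 "assignment": {"consumer-1": ["my-topic-0", "my-topic-1"]}}
{code}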



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5462) Add a configuration for users to specify a template for building a custom principal name

2017-06-16 Thread Vahid Hashemian (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16052548#comment-16052548
 ] 

Vahid Hashemian commented on KAFKA-5462:


[~Koelli] Could you please elaborate and provide details about this bug?

> Add a configuration for users to specify a template for building a custom 
> principal name
> 
>
> Key: KAFKA-5462
> URL: https://issues.apache.org/jira/browse/KAFKA-5462
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.10.2.1
>Reporter: Koelli Mungee
>
> Add a configuration for users to specify a template for building a custom 
> principal name.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

