[jira] [Comment Edited] (KAFKA-4788) Broker level configuration 'log.segment.bytes' not used when 'segment.bytes' not configured per topic.

2017-02-23 Thread Manikumar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15880385#comment-15880385
 ] 

Manikumar edited comment on KAFKA-4788 at 2/23/17 1:03 PM:
---

[~ijuma]  While using kafka-topics.sh, we do topic-level config 
validation on the client (TopicCommand) side, not on the broker side. So we don't 
have access to broker configs and the KAFKA-4092 validation won't work. This 
validation could be added to the CreateTopic/Admin request code path.

Also, the KAFKA-4092 change may not be required; it's just a config error, not a bug. 
We don't have this check for regular broker configs (without topic-level 
configs). Maybe we can revert the patch. 


was (Author: omkreddy):
[~ijuma]  While using kafka-topics.sh, we are doing topic-level config 
validation on the client (TopicCommand) side, not at broker side. So we don't 
have access to broker configs and KAFKA-4092 validation does not work. This 
validation can be added to Create Topic/Admin Request request code path.

Also, KAFKA-4092 change may not be required. it's just config error, not a bug. 
We don't have this check for default broker configs (without topic-level 
configs). Maybe we can revert the patch. 

> Broker level configuration 'log.segment.bytes' not used when 'segment.bytes' 
> not configured per topic.
> --
>
> Key: KAFKA-4788
> URL: https://issues.apache.org/jira/browse/KAFKA-4788
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 0.10.2.0
>Reporter: Ciprian Pascu
>  Labels: regression
> Fix For: 0.10.3.0, 0.10.2.1
>
>
> In the previous version, if there was no topic-level configuration 
> 'segment.bytes', then the corresponding value from the broker configuration 
> was used (for 'segment.bytes', this is the value configured for 
> 'log.segment.bytes'). In 0.10.2.0, if the configuration at topic level is 
> missing, the value configured at broker level is not used; instead, a 
> hard-coded default for that broker-level configuration is used (for 
> 'log.segment.bytes', this is 'val LogSegmentBytes = 1 * 1024 * 1024 * 
> 1024').
> However, the documentation says: 'A given server default config 
> value only applies to a topic if it does not have an explicit topic config 
> override.' So, according to this, if there is, for example, no value 
> configured for 'segment.bytes', then the value configured for 
> 'log.segment.bytes' should be used.
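The fallback described above can be sketched in a few lines. This is a hypothetical helper, not Kafka's actual implementation; the class and method names are illustrative, and the 1 GiB constant mirrors the LogSegmentBytes default quoted in the report:

```java
import java.util.Properties;

public class ConfigFallback {
    static final long DEFAULT_SEGMENT_BYTES = 1L * 1024 * 1024 * 1024; // 1 GiB

    // Hypothetical lookup: topic-level override wins, else the broker-level
    // config, else the hard-coded default (the value the reporter observed).
    static long segmentBytes(Properties topicProps, Properties brokerProps) {
        String topicVal = topicProps.getProperty("segment.bytes");
        if (topicVal != null) return Long.parseLong(topicVal);
        String brokerVal = brokerProps.getProperty("log.segment.bytes");
        if (brokerVal != null) return Long.parseLong(brokerVal); // documented behavior
        return DEFAULT_SEGMENT_BYTES; // what 0.10.2.0 reportedly falls back to
    }

    public static void main(String[] args) {
        Properties topic = new Properties();  // no topic-level override
        Properties broker = new Properties();
        broker.setProperty("log.segment.bytes", "536870912"); // 512 MiB broker default
        System.out.println(segmentBytes(topic, broker)); // falls back to the broker value
    }
}
```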



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)



[jira] [Commented] (KAFKA-4788) Broker level configuration 'log.segment.bytes' not used when 'segment.bytes' not configured per topic.

2017-02-23 Thread Manikumar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15880281#comment-15880281
 ] 

Manikumar commented on KAFKA-4788:
--

Looks like the KAFKA-4092 change introduced this issue. During validation, we are 
not including the original/broker props. 

> Broker level configuration 'log.segment.bytes' not used when 'segment.bytes' 
> not configured per topic.
> --
>
> Key: KAFKA-4788
> URL: https://issues.apache.org/jira/browse/KAFKA-4788
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 0.10.2.0
>Reporter: Ciprian Pascu
>
> In the previous version, if there was no topic-level configuration 
> 'segment.bytes', then the corresponding value from the broker configuration 
> was used (for 'segment.bytes', this is the value configured for 
> 'log.segment.bytes'). In 0.10.2.0, if the configuration at topic level is 
> missing, the value configured at broker level is not used; instead, a 
> hard-coded default for that broker-level configuration is used (for 
> 'log.segment.bytes', this is 'val LogSegmentBytes = 1 * 1024 * 1024 * 
> 1024').
> However, the documentation says: 'A given server default config 
> value only applies to a topic if it does not have an explicit topic config 
> override.' So, according to this, if there is, for example, no value 
> configured for 'segment.bytes', then the value configured for 
> 'log.segment.bytes' should be used.





[jira] [Resolved] (KAFKA-1516) Producer Performance Test sends messages with bytes of 0x0

2017-03-30 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-1516.
--
Resolution: Fixed

Resolving this, as support for passing custom message payloads was added in 
KAFKA-4432.

> Producer Performance Test sends messages with bytes of 0x0
> --
>
> Key: KAFKA-1516
> URL: https://issues.apache.org/jira/browse/KAFKA-1516
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.1.1
>Reporter: Daniel Compton
>Priority: Minor
>
> The producer performance test in Kafka sends messages with either [0x0 
> bytes|https://github.com/apache/kafka/blob/0.8.1/perf/src/main/scala/kafka/perf/ProducerPerformance.scala#L237]
>  or messages with [all 
> X's|https://github.com/apache/kafka/blob/0.8.1/perf/src/main/scala/kafka/perf/ProducerPerformance.scala#L225].
>  This skews the compression ratio massively and probably affects performance 
> in other ways.
> We want to create messages which will give a more realistic performance 
> profile. Using random bytes may not be the best solution as these won't 
> compress at all and will skew compression times.
> Perhaps using a template which injects random or sequential data into it 
> could work. Or maybe I'm overthinking it and we should just go for random 
> bytes. What other options do we have? Others seem to use random bytes like 
> [cassandra-stress|https://github.com/zznate/cassandra-stress/blob/master/src/main/java/com/riptano/cassandra/stress/InsertCommand.java#L39].
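The template idea above can be sketched as follows. This is a hypothetical generator, not the ProducerPerformance code; it fills part of each message from a fixed template and part with random bytes, so the payload compresses partially rather than completely (all X's) or not at all (all random):

```java
import java.util.Arrays;
import java.util.Random;
import java.util.zip.Deflater;

public class PayloadSketch {
    // Hypothetical payload generator: a fixed template with a random section
    // injected, giving a compression ratio between the two extremes.
    static byte[] mixedPayload(int size, double randomFraction, long seed) {
        byte[] payload = new byte[size];
        Arrays.fill(payload, (byte) 'X');            // highly compressible template part
        int randomBytes = (int) (size * randomFraction);
        byte[] noise = new byte[randomBytes];
        new Random(seed).nextBytes(noise);           // incompressible part
        System.arraycopy(noise, 0, payload, 0, randomBytes);
        return payload;
    }

    // DEFLATE-compressed size of the payload, used to compare compressibility.
    static int compressedSize(byte[] data) {
        Deflater deflater = new Deflater();
        deflater.setInput(data);
        deflater.finish();
        byte[] out = new byte[data.length * 2 + 64];
        int n = deflater.deflate(out);
        deflater.end();
        return n;
    }

    public static void main(String[] args) {
        byte[] allX = mixedPayload(1000, 0.0, 42);
        byte[] mixed = mixedPayload(1000, 0.5, 42);
        // all-X compresses to almost nothing; the mixed payload sits in between.
        System.out.println(compressedSize(allX) + " < " + compressedSize(mixed) + " < 1000");
    }
}
```

With DEFLATE, the all-X payload shrinks to a few bytes while the half-random payload stays above roughly half its original size, which is closer to real-world message entropy.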





[jira] [Resolved] (KAFKA-4807) Kafka broker fail over bug in zookeeper

2017-03-10 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-4807.
--
Resolution: Duplicate

> Kafka broker fail over bug in zookeeper
> ---
>
> Key: KAFKA-4807
> URL: https://issues.apache.org/jira/browse/KAFKA-4807
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.10.0.0
> Environment: cloudera-0.10.0-kafka-2.1.0
>Reporter: Mingzhou Zhuang
>
> https://stackoverflow.com/questions/42362911/kafka-high-level-consumer-error-code-15/42416232#42416232
> I'm using Cloudera-0.10.0-kafka-2.1.0.
> I hit the same problem and solved it by deleting the old ZooKeeper data.
> However, the old kafka-console-consumer worked fine before the ZooKeeper data 
> was deleted.
> Maybe there's something wrong with broker failover in the new consumer 
> API?





[jira] [Resolved] (KAFKA-639) RESTful proxy

2017-03-10 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-639.
-
Resolution: Duplicate

> RESTful proxy
> -
>
> Key: KAFKA-639
> URL: https://issues.apache.org/jira/browse/KAFKA-639
> Project: Kafka
>  Issue Type: New Feature
>  Components: contrib
>Reporter: David Arthur
>Priority: Minor
>
> An issue to track work on a RESTful proxy 
> See initial discussion here: 
> http://mail-archives.apache.org/mod_mbox/incubator-kafka-dev/201211.mbox/%3C00AF3218-0AA9-42EA-B601-F54B167FDCC0%40gmail.com%3E





[jira] [Closed] (KAFKA-4864) Kafka Secure Migrator tool doesn't secure all the nodes

2017-03-09 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar closed KAFKA-4864.


> Kafka Secure Migrator tool doesn't secure all the nodes
> ---
>
> Key: KAFKA-4864
> URL: https://issues.apache.org/jira/browse/KAFKA-4864
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.1.0, 0.10.1.1, 0.10.2.0
>Reporter: Stephane Maarek
>Priority: Critical
> Fix For: 0.11.0.0, 0.10.2.1
>
>
> It seems that the secure nodes, as referred to by ZkUtils.scala, are the following:
> https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/utils/ZkUtils.scala#L201
> A couple of things:
> - the list is highly outdated; for example, the most important nodes such 
> as kafka-acls don't get secured. That's a huge security risk. Would it be 
> better to just secure all the nodes recursively from the given root?
> - the roots of some nodes aren't secured, e.g. /brokers (and many others).
> The result is the following after running the tool:
> zookeeper-security-migration --zookeeper.acl secure --zookeeper.connect 
> zoo1:2181/kafka-test
> [zk: localhost:2181(CONNECTED) 9] getAcl /kafka-test/brokers
> 'world,'anyone
> : cdrwa
> [zk: localhost:2181(CONNECTED) 11] getAcl /kafka-test/brokers/ids
> 'world,'anyone
> : r
> 'sasl,'myzkcli...@example.com
> : cdrwa
> [zk: localhost:2181(CONNECTED) 16] getAcl /kafka-test/kafka-acl
> 'world,'anyone
> : cdrwa
> That seems pretty bad, to be honest... A fast enough ZkClient could delete 
> some root nodes and create the nodes it likes before the ACLs get set. 
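Securing a subtree recursively, as suggested above, amounts to enumerating every path from the given root down and applying ACLs to each. A minimal sketch of the path-expansion step (plain Java, no ZooKeeper client; the actual child traversal and setACL calls are assumed, and the helper name is illustrative):

```java
import java.util.ArrayList;
import java.util.List;

public class ZkPaths {
    // Hypothetical helper: a znode path plus all its ancestors, root first, e.g.
    // "/kafka-test/brokers/ids" -> [/kafka-test, /kafka-test/brokers, /kafka-test/brokers/ids].
    // Setting ACLs on every entry closes the gap where a parent such as
    // /brokers is left world-writable while only the leaf is secured.
    static List<String> selfAndAncestors(String path) {
        List<String> result = new ArrayList<>();
        int idx = 0;
        while ((idx = path.indexOf('/', idx + 1)) != -1) {
            result.add(path.substring(0, idx));   // each ancestor prefix
        }
        result.add(path);                         // the node itself
        return result;
    }

    public static void main(String[] args) {
        System.out.println(selfAndAncestors("/kafka-test/brokers/ids"));
    }
}
```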





[jira] [Resolved] (KAFKA-3990) Kafka New Producer may raise an OutOfMemoryError

2017-03-10 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3990.
--
Resolution: Duplicate

> Kafka New Producer may raise an OutOfMemoryError
> 
>
> Key: KAFKA-3990
> URL: https://issues.apache.org/jira/browse/KAFKA-3990
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.9.0.1
> Environment: Docker, Base image : CentOS
> Java 8u77
> Marathon
>Reporter: Brice Dutheil
> Attachments: app-producer-config.log, kafka-broker-logs.zip
>
>
> We are regularly seeing OOME errors on a Kafka producer; we first saw:
> {code}
> java.lang.OutOfMemoryError: Java heap space
> at java.nio.HeapByteBuffer.(HeapByteBuffer.java:57) ~[na:1.8.0_77]
> at java.nio.ByteBuffer.allocate(ByteBuffer.java:335) ~[na:1.8.0_77]
> at 
> org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:93)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:71)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:153) 
> ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:134) 
> ~[kafka-clients-0.9.0.1.jar:na]
> at org.apache.kafka.common.network.Selector.poll(Selector.java:286) 
> ~[kafka-clients-0.9.0.1.jar:na]
> at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:256) 
> ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:216) 
> ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:128) 
> ~[kafka-clients-0.9.0.1.jar:na]
> at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_77]
> {code}
> This line refers to a buffer allocation {{ByteBuffer.allocate(receiveSize)}} 
> (see 
> https://github.com/apache/kafka/blob/0.9.0.1/clients/src/main/java/org/apache/kafka/common/network/NetworkReceive.java#L93)
> Usually the app runs fine within a 200/400 MB heap and a 64 MB Metaspace, and 
> we are producing small messages, 500 B at most.
> Also, the error doesn't appear in the development environment. In order to 
> identify the issue we tweaked the code to log the actual 
> allocation size; we got this stack:
> {code}
> 09:55:49.484 [auth] [kafka-producer-network-thread | producer-1] WARN  
> o.a.k.c.n.NetworkReceive HEAP-ISSUE: constructor : Integer='-1', String='-1'
> 09:55:49.485 [auth] [kafka-producer-network-thread | producer-1] WARN  
> o.a.k.c.n.NetworkReceive HEAP-ISSUE: method : 
> NetworkReceive.readFromReadableChannel.receiveSize=1213486160
> java.lang.OutOfMemoryError: Java heap space
> Dumping heap to /tmp/tomcat.hprof ...
> Heap dump file created [69583827 bytes in 0.365 secs]
> 09:55:50.324 [auth] [kafka-producer-network-thread | producer-1] ERROR 
> o.a.k.c.utils.KafkaThread Uncaught exception in kafka-producer-network-thread 
> | producer-1: 
> java.lang.OutOfMemoryError: Java heap space
>   at java.nio.HeapByteBuffer.(HeapByteBuffer.java:57) ~[na:1.8.0_77]
>   at java.nio.ByteBuffer.allocate(ByteBuffer.java:335) ~[na:1.8.0_77]
>   at 
> org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:93)
>  ~[kafka-clients-0.9.0.1.jar:na]
>   at 
> org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:71)
>  ~[kafka-clients-0.9.0.1.jar:na]
>   at 
> org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:153) 
> ~[kafka-clients-0.9.0.1.jar:na]
>   at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:134) 
> ~[kafka-clients-0.9.0.1.jar:na]
>   at org.apache.kafka.common.network.Selector.poll(Selector.java:286) 
> ~[kafka-clients-0.9.0.1.jar:na]
>   at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:256) 
> ~[kafka-clients-0.9.0.1.jar:na]
>   at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:216) 
> ~[kafka-clients-0.9.0.1.jar:na]
>   at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:128) 
> ~[kafka-clients-0.9.0.1.jar:na]
>   at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_77]
> {code}
> Notice the size to allocate: {{1213486160}}, ~1.2 GB. I'm not yet sure how this 
> size is initialised.
> Notice as well that every time this OOME appears the {{NetworkReceive}} 
> constructor at 
> https://github.com/apache/kafka/blob/0.9.0.1/clients/src/main/java/org/apache/kafka/common/network/NetworkReceive.java#L49
>  receives the parameters {{maxSize=-1}}, {{source="-1"}}
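As an aside that is not raised in this thread, so treat it as an unverified hypothesis: {{1213486160}} is exactly 0x48545450, the ASCII bytes "HTTP" read as a big-endian int. That is the pattern one would see if the 4-byte size prefix were actually the start of an HTTP response, e.g. the client was pointed at an HTTP port:

```java
public class SizeDecode {
    // Re-interpret a 4-byte big-endian size prefix as ASCII text.
    static String asAscii(int size) {
        byte[] b = new byte[] {
            (byte) (size >>> 24), (byte) (size >>> 16),
            (byte) (size >>> 8),  (byte) size
        };
        return new String(b, java.nio.charset.StandardCharsets.US_ASCII);
    }

    public static void main(String[] args) {
        // 1213486160 == 0x48545450: the allocation size from the stack trace above
        System.out.println(asAscii(1213486160)); // prints "HTTP"
    }
}
```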
> We may have missed some configuration in our setup, but Kafka clients shouldn't 
> raise an OOME. For reference the producer is initialised with:
> {code}
> Properties 

[jira] [Resolved] (KAFKA-1436) Idempotent Producer / Duplicate Detection

2017-03-10 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-1436.
--
Resolution: Duplicate

> Idempotent Producer / Duplicate Detection
> -
>
> Key: KAFKA-1436
> URL: https://issues.apache.org/jira/browse/KAFKA-1436
> Project: Kafka
>  Issue Type: Improvement
>  Components: consumer, producer 
>Affects Versions: 0.8.0, 0.8.1, 0.8.1.1, 0.8.2.0, 0.10.1.0
>Reporter: James Thornton
>Assignee: Neha Narkhede
>
> Dealing with duplicate messages is one of the major issues for teams using 
> Kafka, and Jay Kreps posted a page on implementing an Idempotent Producer to 
> address this issue:
> https://cwiki.apache.org/confluence/display/KAFKA/Idempotent+Producer
> MapDB 1.0 (https://github.com/jankotek/MapDB) was just released, and either 
> it or Java Chronicle (https://github.com/OpenHFT/Java-Chronicle/) could be 
> embedded within each broker to provide a high-performance, random-access, 
> off-heap store for request IDs.
> As Jay points out in his post, global unique request IDs probably aren't 
> needed, but if that need should arise, Twitter's Snowflake service 
> (https://github.com/twitter/snowflake/) might be useful.





[jira] [Resolved] (KAFKA-3391) Kafka to ZK timeout

2017-03-10 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3391.
--
Resolution: Not A Problem

 Please reopen if the issue still exists. 

> Kafka to ZK timeout 
> 
>
> Key: KAFKA-3391
> URL: https://issues.apache.org/jira/browse/KAFKA-3391
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer, zkclient
>Affects Versions: 0.8.2.0
> Environment: RHEL 7.2, AWS EC2 compute instance
>Reporter: Karthik Reddy
>Assignee: Neha Narkhede
>Priority: Critical
>
> Hi Team,
> We have seen the below messages in the Kafka logs, indicating there was a 
> timeout on ZK.
> Could you please advise us on how to tune or better optimize the Kafka-ZK 
> communication?
> Kafka and ZK are on separate servers. Currently, we have the ZK timeout set to 
> 6000 ms.
> Kafka servers have EBS volumes as the disk.
> We had to restart our consumers and ZK to resolve this issue.
> [2016-03-10 02:29:25,858] INFO Unable to read additional data from server 
> sessionid 0x5531d0003f30030, likely server has closed socket, closing socket 
> connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
> [2016-03-10 02:29:25,958] INFO zookeeper state changed (Disconnected) 
> (org.I0Itec.zkclient.ZkClient)
> [2016-03-10 02:29:26,381] INFO Opening socket connection to server 
> 10.200.77.74/10.200.77.74:8164. Will not attempt to authenticate using SASL 
> (unknown error) (org.apache.zookeeper.ClientCnxn)
> [2016-03-10 02:29:26,382] INFO Socket connection established to 
> 10.200.77.74/10.200.77.74:8164, initiating session 
> (org.apache.zookeeper.ClientCnxn)
> [2016-03-10 02:29:26,385] INFO Session establishment complete on server 
> 10.200.77.74/10.200.77.74:8164, sessionid = 0x5531d0003f30030, negotiated 
> timeout = 6000 (org.apache.zookeeper.ClientCnxn)
> [2016-03-10 02:29:26,385] INFO zookeeper state changed (SyncConnected) 
> (org.I0Itec.zkclient.ZkClient)
> [2016-03-10 02:29:30,961] INFO conflict in /controller data: 
> {"version":1,"brokerid":3,"timestamp":"1457594970952"} stored data: 
> {"version":1,"brokerid":5,"timestamp":"1457594970043"} (kafka.utils.ZkUtils$)
> [2016-03-10 02:29:30,969] INFO New leader is 5 
> (kafka.server.ZookeeperLeaderElector$LeaderChangeListener)
> [2016-03-10 02:29:31,620] INFO [ReplicaFetcherManager on broker 3] Removed 
> fetcher for partitions 
> [__consumer_offsets,0],[fulfillment.payments.autopay.mongooperation.response,1],[__consumer_offsets,20],[__consumer_offsets,40]
>  (kafka.server.ReplicaFetcherManager)
> [2016-03-10 02:29:31,621] INFO [ReplicaFetcherManager on broker 3] Removed 
> fetcher for partitions 
> [efit.framework.notification.error,1],[__consumer_offsets,15],[fulfillment.payments.autopay.processexception.notification,1],[__consumer_offsets,35]
>  (kafka.server.ReplicaFetcherManager)
> [2016-03-10 02:29:31,621] INFO Truncating log 
> efit.framework.notification.error-1 to offset 637. (kafka.log.Log)
> [2016-03-10 02:29:31,621] INFO Truncating log __consumer_offsets-15 to offset 
> 0. (kafka.log.Log)
> [2016-03-10 02:29:31,622] INFO Truncating log 
> fulfillment.payments.autopay.processexception.notification-1 to offset 0. 
> (kafka.log.Log)
> [2016-03-10 02:29:31,622] INFO Truncating log __consumer_offsets-35 to offset 
> 0. (kafka.log.Log)
> [2016-03-10 02:29:31,623] INFO Loading offsets from [__consumer_offsets,0] 
> (kafka.server.OffsetManager)
> [2016-03-10 02:29:31,624] INFO Loading offsets from [__consumer_offsets,20] 
> (kafka.server.OffsetManager)
> [2016-03-10 02:29:31,624] INFO Finished loading offsets from 
> [__consumer_offsets,0] in 1 milliseconds. (kafka.server.OffsetManager)
> [2016-03-10 02:29:31,625] INFO Loading offsets from [__consumer_offsets,40] 
> (kafka.server.OffsetManager)
> [2016-03-10 02:29:31,625] INFO Finished loading offsets from 
> [__consumer_offsets,20] in 1 milliseconds. (kafka.server.OffsetManager)
> [2016-03-10 02:29:31,625] INFO Finished loading offsets from 
> [__consumer_offsets,40] in 0 milliseconds. (kafka.server.OffsetManager)
> [2016-03-10 02:29:31,627] INFO [ReplicaFetcherManager on broker 3] Added 
> fetcher for partitions List([[efit.framework.notification.error,1], 
> initOffset 637 to broker id:1,host:10.200.77.78,port:8165] , 
> [[__consumer_offsets,15], initOffset 0 to broker 
> id:1,host:10.200.77.78,port:8165] , 
> [[fulfillment.payments.autopay.processexception.notification,1], initOffset 0 
> to broker id:5,host:10.200.75.150,port:8165] , [[__consumer_offsets,35], 
> initOffset 0 to broker id:1,host:10.200.77.78,port:8165] ) 
> (kafka.server.ReplicaFetcherManager)
> [2016-03-10 02:29:31,627] INFO [ReplicaFetcherThread-0-2], Shutting down 
> (kafka.server.ReplicaFetcherThread
> Thanks,
> Karthik





[jira] [Commented] (KAFKA-4807) Kafka broker fail over bug in zookeeper

2017-03-03 Thread Manikumar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15894097#comment-15894097
 ] 

Manikumar commented on KAFKA-4807:
--

This is related to KAFKA-3959. The offsets.topic.replication.factor broker config 
is now enforced upon auto topic creation. This will be available in the 0.11 
release.
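For clusters with fewer brokers than the default offsets topic replication factor of 3, a common workaround in development setups (an assumption about the reporter's environment, not something confirmed in this thread) is to lower the enforced value in the broker config:

```
# server.properties (development only): match the replication factor to the broker count
offsets.topic.replication.factor=1
```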

> Kafka broker fail over bug in zookeeper
> ---
>
> Key: KAFKA-4807
> URL: https://issues.apache.org/jira/browse/KAFKA-4807
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.10.0.0
> Environment: cloudera-0.10.0-kafka-2.1.0
>Reporter: Mingzhou Zhuang
>
> https://stackoverflow.com/questions/42362911/kafka-high-level-consumer-error-code-15/42416232#42416232
> I'm using Cloudera-0.10.0-kafka-2.1.0.
> I hit the same problem and solved it by deleting the old ZooKeeper data.
> However, the old kafka-console-consumer worked fine before the ZooKeeper data 
> was deleted.
> Maybe there's something wrong with broker failover in the new consumer 
> API?






[jira] [Comment Edited] (KAFKA-4795) Confusion around topic deletion

2017-03-03 Thread Manikumar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15894178#comment-15894178
 ] 

Manikumar edited comment on KAFKA-4795 at 3/3/17 11:26 AM:
---

[~vahid]  If topic deletion is disabled, then we clear the delete-topic 
path in ZK. This was added in KAFKA-3175. On controller change, we delete 
the ZK delete-topic path.


was (Author: omkreddy):
[~vahid]  If we disable topic deletion then we clearing the delete topic path 
in ZK. This is added in KAFKA-3175. On controller change, we will delete the zk 
delete topic path.

> Confusion around topic deletion
> ---
>
> Key: KAFKA-4795
> URL: https://issues.apache.org/jira/browse/KAFKA-4795
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.2.0
>Reporter: Vahid Hashemian
>Assignee: Vahid Hashemian
>
> Topic deletion works as follows in 0.10.2.0:
> # {{bin/zookeeper-server-start.sh config/zookeeper.properties}}
> # {{bin/kafka-server-start.sh config/server.properties}} (uses default 
> {{server.properties}})
> # {{bin/kafka-topics.sh --zookeeper localhost:2181 --create --topic test 
> --replication-factor 1 --partitions 1}} (creates the topic {{test}})
> # {{bin/kafka-topics.sh --zookeeper localhost:2181 --list}} (returns {{test}})
> # {{bin/kafka-topics.sh --zookeeper localhost:2181 --delete --topic test}} 
> (reports {{Topic test is marked for deletion. Note: This will have no impact 
> if delete.topic.enable is not set to true.}})
> # {{bin/kafka-topics.sh --zookeeper localhost:2181 --list}} (returns {{test}})
> Previously, the last command above returned {{test - marked for deletion}}, 
> which matched the output statement of the {{--delete}} topic command.
> Continuing with the above scenario,
> # stop the broker
> # add the broker config {{delete.topic.enable=true}} in the config file
> # {{bin/kafka-server-start.sh config/server.properties}} (this does not 
> remove the topic {{test}}, as if the topic was never marked for deletion).
> # {{bin/kafka-topics.sh --zookeeper localhost:2181 --delete --topic test}} 
> (reports {{Topic test is marked for deletion. Note: This will have no impact 
> if delete.topic.enable is not set to true.}})
> # {{bin/kafka-topics.sh --zookeeper localhost:2181 --list}} (returns no 
> topics).
> It seems that the "marked for deletion" state for topics no longer exists.
> I opened this JIRA so I can get a confirmation on the expected topic deletion 
> behavior, because in any case, I think the user experience could be improved 
> (either there is a bug in the code, or the command's output statement is 
> misleading).





[jira] [Resolved] (KAFKA-1832) Async Producer will cause 'java.net.SocketException: Too many open files' when broker host does not exist

2017-08-15 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-1832.
--
Resolution: Fixed

Fixed in  KAFKA-1041

> Async Producer will cause 'java.net.SocketException: Too many open files' 
> when broker host does not exist
> -
>
> Key: KAFKA-1832
> URL: https://issues.apache.org/jira/browse/KAFKA-1832
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.8.1, 0.8.1.1
> Environment: linux
>Reporter: barney
>Assignee: Jun Rao
>
> h3.How to reproduce the problem:
> * producer configuration:
> ** producer.type=async
> ** metadata.broker.list=not.existed.com:9092
> Make sure the host '*not.existed.com*' does not exist in DNS server or 
> /etc/hosts;
> * send a lot of messages continuously using the above producer
> It will cause '*java.net.SocketException: Too many open files*' after a 
> while; alternatively, use '*lsof -p $pid|wc -l*' to watch the count of open 
> files increase over time until it reaches the system 
> limit (check with '*ulimit -n*').
> h3.Problem cause:
> {code:title=kafka.network.BlockingChannel|borderStyle=solid} 
> channel.connect(new InetSocketAddress(host, port))
> {code}
> this line will throw a 
> '*java.nio.channels.UnresolvedAddressException*' when the broker host does not 
> exist, and at the same time the field '*connected*' is false;
> In *kafka.producer.SyncProducer*, '*disconnect()*' will not invoke 
> '*blockingChannel.disconnect()*' because '*blockingChannel.isConnected*' is 
> false, which means the FileDescriptor is created but never closed;
> h3.More:
> When the broker is a non-existent IP (for example: 
> metadata.broker.list=1.1.1.1:9092) instead of a non-existent host, the 
> problem does not appear;
> In *SocketChannelImpl.connect()*, '*Net.checkAddress()*' is not in the try-catch 
> block but '*Net.connect()*' is, which makes the difference;
> h3.Temporary Solution:
> {code:title=kafka.network.BlockingChannel|borderStyle=solid} 
> try {
>   channel.connect(new InetSocketAddress(host, port))
> } catch {
>   case e: UnresolvedAddressException =>
>     disconnect()
>     throw e
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-2220) Improvement: Could we support rewind by time ?

2017-08-15 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-2220.
--
Resolution: Fixed

This got fixed in  KAFKA-4743 / KIP-122.

> Improvement: Could we support  rewind by time  ?
> 
>
> Key: KAFKA-2220
> URL: https://issues.apache.org/jira/browse/KAFKA-2220
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.8.1.1
>Reporter: Li Junjun
> Attachments: screenshot.png
>
>
> Improvement: support rewind by time!
> My scenario is as follows:
> A program reads records from Kafka, processes them, and then writes to a dir in 
> HDFS like /hive/year=/month=xx/day=xx/hour=10. If the program goes 
> down, I can restart it, so it reads from the last offset.
> But what if the program was configured with wrong params? Then I need to remove 
> the dir hour=10, reconfigure my program, and find the offset where 
> hour=10 starts, but right now I can't do this.
> And there are many scenarios like this.
> So, can we add a time partition, so we can rewind by time?
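This request was eventually served by timestamp-based lookup (KIP-122 added an offsets reset tool, and KafkaConsumer.offsetsForTimes exposes the lookup). The core operation, finding the earliest offset whose timestamp is at or after a target time, can be sketched as a binary search over a sorted in-memory time index. This is a toy model, not Kafka's on-disk index format:

```java
public class TimeIndexLookup {
    // timestamps[i] is the (monotonically non-decreasing) timestamp of offset i.
    // Returns the smallest offset whose timestamp >= target, or -1 if none.
    public static int offsetForTime(long[] timestamps, long target) {
        int lo = 0, hi = timestamps.length; // search window [lo, hi)
        while (lo < hi) {
            int mid = (lo + hi) >>> 1;
            if (timestamps[mid] < target) lo = mid + 1;
            else hi = mid;
        }
        return lo == timestamps.length ? -1 : lo;
    }

    public static void main(String[] args) {
        long[] ts = {100, 200, 200, 300, 500};
        System.out.println(offsetForTime(ts, 250)); // 3: first offset at/after t=250
    }
}
```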





[jira] [Resolved] (KAFKA-2283) scheduler exception on non-controller node when shutdown

2017-08-15 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-2283.
--
Resolution: Fixed

> scheduler exception on non-controller node when shutdown
> 
>
> Key: KAFKA-2283
> URL: https://issues.apache.org/jira/browse/KAFKA-2283
> Project: Kafka
>  Issue Type: Bug
>  Components: controller
>Affects Versions: 0.8.2.1
> Environment: linux debian
>Reporter: allenlee
>Assignee: Neha Narkhede
>Priority: Minor
>
> When the broker shuts down, there is an error log saying 'Kafka scheduler has not 
> been started'.
> It only appears on non-controller nodes. If the broker is the controller, it 
> shuts down without the warning log.
> IMHO, *autoRebalanceScheduler.shutdown()* should only be invoked on the controller, 
> right?
> {quote}
> [2015-06-17 22:32:51,814] INFO Shutdown complete. (kafka.log.LogManager)
> [2015-06-17 22:32:51,815] WARN Kafka scheduler has not been started 
> (kafka.utils.Utils$)
> java.lang.IllegalStateException: Kafka scheduler has not been started
> at kafka.utils.KafkaScheduler.ensureStarted(KafkaScheduler.scala:114)
> at kafka.utils.KafkaScheduler.shutdown(KafkaScheduler.scala:86)
> at 
> kafka.controller.KafkaController.onControllerResignation(KafkaController.scala:350)
> at 
> kafka.controller.KafkaController.shutdown(KafkaController.scala:664)
> at 
> kafka.server.KafkaServer$$anonfun$shutdown$8.apply$mcV$sp(KafkaServer.scala:285)
> at kafka.utils.Utils$.swallow(Utils.scala:172)
> at kafka.utils.Logging$class.swallowWarn(Logging.scala:92)
> at kafka.utils.Utils$.swallowWarn(Utils.scala:45)
> at kafka.utils.Logging$class.swallow(Logging.scala:94)
> at kafka.utils.Utils$.swallow(Utils.scala:45)
> at kafka.server.KafkaServer.shutdown(KafkaServer.scala:285)
> at 
> kafka.server.KafkaServerStartable.shutdown(KafkaServerStartable.scala:42)
> at kafka.Kafka$$anon$1.run(Kafka.scala:42)
> [2015-06-17 22:32:51,818] INFO Terminate ZkClient event thread. 
> (org.I0Itec.zkclient.ZkEventThread)
> {quote}
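The warning is raised because shutdown() runs on a scheduler that was never started on non-controller brokers. The general remedy, making shutdown a safe no-op unless startup actually ran, can be sketched like this (an illustration of the pattern, not the actual Kafka fix):

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class GuardedScheduler {
    private final AtomicBoolean started = new AtomicBoolean(false);

    public void startup() {
        started.set(true);
        // ... create thread pool, schedule tasks ...
    }

    // Shut down only if startup ran; avoids "scheduler has not been started".
    public boolean shutdown() {
        if (!started.compareAndSet(true, false)) {
            return false; // never started (e.g. non-controller broker): no-op
        }
        // ... stop thread pool ...
        return true;
    }

    public static void main(String[] args) {
        GuardedScheduler s = new GuardedScheduler();
        System.out.println(s.shutdown()); // false: safe no-op
        s.startup();
        System.out.println(s.shutdown()); // true: real shutdown
    }
}
```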





[jira] [Resolved] (KAFKA-3087) Fix documentation for retention.ms property and update documentation for LogConfig.scala class

2017-08-16 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3087.
--
Resolution: Fixed

This doc issue was fixed in newer Kafka versions.

> Fix documentation for retention.ms property and update documentation for 
> LogConfig.scala class
> --
>
> Key: KAFKA-3087
> URL: https://issues.apache.org/jira/browse/KAFKA-3087
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Reporter: Raju Bairishetti
>Assignee: Jay Kreps
>Priority: Critical
>  Labels: documentation
>
> Log retention settings can be set on the broker, and some properties can be 
> overridden at the topic level. 
> |Property |Default|Server Default property| Description|
> |retention.ms|7 days|log.retention.minutes|This configuration controls the 
> maximum time we will retain a log before we will discard old log segments to 
> free up space if we are using the "delete" retention policy. This represents 
> an SLA on how soon consumers must read their data.|
> But retention.ms is in milliseconds, not minutes. So the corresponding *Server 
> Default property* should be *log.retention.ms* instead of 
> *log.retention.minutes*.
> It would be better to mention whether the time is in 
> millis/minutes/hours on the documentation page, and to document it in the code as 
> well (right now the code just says *age*; we should specify the *age 
> in a time granularity*).
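For reference, the millisecond-granularity pairing the report asks the docs to state looks like this in practice (values here are illustrative, not recommendations):

```properties
# server.properties: broker-wide default, in milliseconds (7 days)
log.retention.ms=604800000

# topic-level override, also in milliseconds (1 day); takes precedence
retention.ms=86400000
```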





[jira] [Resolved] (KAFKA-275) max.message.size is not enforced for compressed messages

2017-08-17 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-275.
-
Resolution: Fixed

This issue is fixed in the latest versions. Please reopen if the issue still 
exists. 


> max.message.size is not enforced for compressed messages
> 
>
> Key: KAFKA-275
> URL: https://issues.apache.org/jira/browse/KAFKA-275
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.7
>Reporter: Neha Narkhede
>
> The max.message.size check is not performed on the compressed message as a whole, 
> but only on each inner message that forms it. Due to this, even if the 
> max.message.size is set to 1MB, the producer can technically send n 1MB 
> messages as one compressed message. This can cause memory issues on the 
> server as well as deserialization issues on the consumer. The consumer's 
> fetch size has to be > max.message.size in order to be able to read data. If 
> one message is larger than the fetch.size, the consumer will throw an 
> exception and cannot proceed until the fetch.size is increased. 
> Due to this bug, even if the fetch.size > max.message.size, the consumer can 
> still get stuck on a message that is larger than max.message.size.
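The gap described above, validating each inner message but never the compressed wrapper, can be illustrated with stdlib GZIP. This is a toy model; Kafka's wire format, codecs, and exact size accounting differ:

```java
import java.io.ByteArrayOutputStream;
import java.util.Random;
import java.util.zip.GZIPOutputStream;

public class MessageSizeCheck {
    static final int MAX_MESSAGE_SIZE = 1_000_000; // stand-in for max.message.size

    // Buggy check: each inner payload is validated, the wrapper never is.
    public static boolean innerCheckOnly(byte[][] payloads) {
        for (byte[] p : payloads) if (p.length > MAX_MESSAGE_SIZE) return false;
        return true;
    }

    // Fixed check: validate the size of the compressed wrapper too.
    public static boolean wrapperCheck(byte[][] payloads) throws Exception {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(buf)) {
            for (byte[] p : payloads) gz.write(p);
        }
        return buf.size() <= MAX_MESSAGE_SIZE;
    }

    public static void main(String[] args) throws Exception {
        Random rnd = new Random(42);
        byte[][] batch = new byte[10][];
        for (int i = 0; i < 10; i++) {
            batch[i] = new byte[900_000]; // each inner message is under the limit
            rnd.nextBytes(batch[i]);      // incompressible, so the wrapper stays big
        }
        System.out.println("inner check passes:   " + innerCheckOnly(batch));
        System.out.println("wrapper check passes: " + wrapperCheck(batch));
    }
}
```

With ten random 0.9 MB payloads, every inner message passes while the compressed wrapper is still roughly 9 MB, exactly the hole the report describes.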





[jira] [Resolved] (KAFKA-359) Add message constructor that takes payload as a byte buffer

2017-08-17 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-359?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-359.
-
Resolution: Fixed

This has been fixed in newer Kafka versions.

> Add message constructor that takes payload as a byte buffer
> ---
>
> Key: KAFKA-359
> URL: https://issues.apache.org/jira/browse/KAFKA-359
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 0.8.0
>Reporter: Chris Riccomini
>
> Currently, if a ByteBuffer is passed into Message(), it treats the buffer as 
> the message's buffer (including magic byte, metadata, etc.) rather than the 
> payload. If you wish to construct a Message and provide just the payload, you 
> have to use a byte array, which results in an extra copy if your payload data 
> is already in a byte buffer.
> For optimization, it would be nice to also provide a constructor like:
> this(payload: ByteBuffer, isPayload: Boolean)
> The existing this(buffer: ByteBuffer) constructor could then just be changed 
> to this(buffer, false).
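The extra copy the reporter wants to avoid is the intermediate byte[] round-trip; the difference between the two paths can be shown with plain NIO buffers, independent of Kafka's Message class:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class PayloadBuffers {
    // Copying path: buffer -> byte[] -> new buffer. One extra full copy.
    public static ByteBuffer viaByteArray(ByteBuffer payload) {
        byte[] copy = new byte[payload.remaining()];
        payload.duplicate().get(copy); // duplicate() leaves the caller's position intact
        return ByteBuffer.wrap(copy);
    }

    // Zero-copy path: hand over a read-only view of the same backing storage.
    public static ByteBuffer viaView(ByteBuffer payload) {
        return payload.asReadOnlyBuffer();
    }

    public static void main(String[] args) {
        ByteBuffer payload = ByteBuffer.wrap("hello".getBytes(StandardCharsets.UTF_8));
        System.out.println(viaByteArray(payload).equals(viaView(payload))); // true: same contents
    }
}
```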





[jira] [Resolved] (KAFKA-357) Refactor zookeeper code in KafkaZookeeper into reusable components

2017-08-17 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-357.
-
Resolution: Duplicate

ZooKeeper-related code is being refactored in KAFKA-5027 / KAFKA-5501.

> Refactor zookeeper code in KafkaZookeeper into reusable components 
> ---
>
> Key: KAFKA-357
> URL: https://issues.apache.org/jira/browse/KAFKA-357
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Neha Narkhede
>
> Currently, we stuck a lot of zookeeper code in KafkaZookeeper. This includes 
> leader election, ISR maintenance etc. However, it will be good to wrap up 
> related code in separate components that make logical sense. A good example 
> of this is the ZKQueue data structure. 





[jira] [Resolved] (KAFKA-196) Topic creation fails on large values

2017-08-17 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-196.
-
Resolution: Fixed

The topic MAX_NAME_LENGTH is set to 249 in newer Kafka versions.

> Topic creation fails on large values
> 
>
> Key: KAFKA-196
> URL: https://issues.apache.org/jira/browse/KAFKA-196
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Reporter: Pierre-Yves Ritschard
> Attachments: 
> 0001-Set-a-hard-limit-on-topic-width-this-fixes-KAFKA-196.patch
>
>
> Since topic logs are stored in a directory holding the topic's name, creation 
> of the directory might fail for large strings.
> This is not a problem per se, but the exception thrown is rather cryptic and 
> hard to figure out for operations.
> I propose fixing this temporarily with a hard limit of 200 chars for topic 
> names; it would also be possible to hash the topic name.
> Another concern is that the exception raised stops the broker, effectively 
> creating a simple DoS vector; I'm concerned about how tests or wrong client 
> library usage can take down the whole broker.
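Newer brokers reject bad names at creation time with a 249-character limit and a restricted character set. A sketch of that validation, modeled on but not identical to Kafka's internal Topic.validate:

```java
public class TopicNameCheck {
    static final int MAX_NAME_LENGTH = 249; // limit used by newer Kafka versions

    // Non-empty, at most 249 chars, only [a-zA-Z0-9._-], and not "." or "..".
    public static boolean isValid(String name) {
        if (name.isEmpty() || name.equals(".") || name.equals("..")) return false;
        if (name.length() > MAX_NAME_LENGTH) return false;
        return name.matches("[a-zA-Z0-9._-]+");
    }

    public static void main(String[] args) {
        System.out.println(isValid("orders.v1"));     // true
        System.out.println(isValid("x".repeat(250))); // false: over the limit
    }
}
```

Failing fast with a clear message in the client avoids the cryptic directory-creation error, and the broker never sees the bad name.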





[jira] [Resolved] (KAFKA-186) no clean way to getCompressionCodec from Java-the-language

2017-08-17 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-186.
-
Resolution: Fixed

The CompressionType Java class was added in newer Kafka versions.

> no clean way to getCompressionCodec from Java-the-language
> --
>
> Key: KAFKA-186
> URL: https://issues.apache.org/jira/browse/KAFKA-186
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.7
>Reporter: Chris Burroughs
>
> The obvious thing fails:
> CompressionCodec.getCompressionCodec(1) results in cannot find symbol
> symbol  : method getCompressionCodec(int)
> location: interface kafka.message.CompressionCodec
> Writing a switch statement with  kafka.message.NoCompressionCodec$.MODULE$ 
> and duplicating the logic in CompressionCodec.getCompressionCodec is no fun, 
> nor is creating a Hashtable just to call Utils.getCompressionCodec.  I'm not 
> sure if there is a magic keyword to make it easy for javac to understand 
> which CompressionCodec I'm referring to.
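Modern Kafka clients sidestep this by exposing a plain Java enum with an id lookup (org.apache.kafka.common.record.CompressionType has this shape). A self-contained sketch of the pattern, with illustrative codec ids:

```java
public enum Codec {
    NONE(0), GZIP(1), SNAPPY(2);

    public final int id;

    Codec(int id) { this.id = id; }

    // Java-friendly lookup: no Scala companion objects involved.
    public static Codec forId(int id) {
        for (Codec c : values()) if (c.id == id) return c;
        throw new IllegalArgumentException("unknown codec id: " + id);
    }

    public static void main(String[] args) {
        System.out.println(Codec.forId(1)); // GZIP
    }
}
```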





[jira] [Resolved] (KAFKA-236) Make 'autooffset.reset' accept a delay in addition to {smallest,largest}

2017-08-17 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-236.
-
Resolution: Fixed

This can be achieved by using the consumer group offsets reset tool or the 
KafkaConsumer.offsetsForTimes API in the latest Kafka versions.

> Make 'autooffset.reset' accept a delay in addition to {smallest,largest}
> 
>
> Key: KAFKA-236
> URL: https://issues.apache.org/jira/browse/KAFKA-236
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Mathias Herberts
>
> Add the possibility to specify a delay in ms which would be used when 
> resetting offsets.
> This would allow for example a client to specify it would like its offset to 
> be reset to the first offset before/after the current time - the given offset.





[jira] [Resolved] (KAFKA-276) Enforce max.message.size on the total message size, not just on payload size

2017-08-17 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-276.
-
Resolution: Fixed

This was fixed in newer Kafka versions.

> Enforce max.message.size on the total message size, not just on payload size
> 
>
> Key: KAFKA-276
> URL: https://issues.apache.org/jira/browse/KAFKA-276
> Project: Kafka
>  Issue Type: Sub-task
>  Components: core
>Affects Versions: 0.7
>Reporter: Neha Narkhede
>
> Today, the max.message.size config is enforced only on the payload size of 
> the message. But the actual message size is header size + payload size.





[jira] [Resolved] (KAFKA-2218) reassignment tool needs to parse and validate the json

2017-08-17 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-2218.
--
Resolution: Duplicate

A PR is available for KAFKA-4914.

> reassignment tool needs to parse and validate the json
> --
>
> Key: KAFKA-2218
> URL: https://issues.apache.org/jira/browse/KAFKA-2218
> Project: Kafka
>  Issue Type: Bug
>Reporter: Joe Stein
>Priority: Critical
>
> Ran into a production issue with the broker.id being set to a string instead 
> of an integer; the controller had nothing in the log and stayed stuck. 
> Eventually we saw this in the broker logs:
> [2015-05-23 15:41:05,863] 67396362 [ZkClient-EventThread-14-ERROR 
> org.I0Itec.zkclient.ZkEventThread - Error handling event ZkEvent[Data of 
> /admin/reassign_partitions changed sent to 
> kafka.controller.PartitionsReassignedListener@78c6aab8]
> java.lang.ClassCastException: java.lang.String cannot be cast to 
> java.lang.Integer
>  at scala.runtime.BoxesRunTime.unboxToInt(Unknown Source)
>  at 
> kafka.controller.KafkaController$$anonfun$4.apply(KafkaController.scala:579)
> We then had to delete the znode from ZooKeeper (/admin/reassign_partitions), 
> fix the JSON, and try again.





[jira] [Resolved] (KAFKA-3389) ReplicaStateMachine areAllReplicasForTopicDeleted check not handling well case when there are no replicas for topic

2017-08-10 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3389.
--
Resolution: Won't Fix

As mentioned in the previous comment, this may not be an issue. Please reopen if 
it still exists.

> ReplicaStateMachine areAllReplicasForTopicDeleted check not handling well 
> case when there are no replicas for topic
> ---
>
> Key: KAFKA-3389
> URL: https://issues.apache.org/jira/browse/KAFKA-3389
> Project: Kafka
>  Issue Type: Bug
>  Components: controller
>Affects Versions: 0.9.0.1
>Reporter: Stevo Slavic
>Assignee: Manikumar
>Priority: Minor
>
> Line ReplicaStateMachine.scala#L285
> {noformat}
> replicaStatesForTopic.forall(_._2 == ReplicaDeletionSuccessful)
> {noformat}
> which is return value of {{areAllReplicasForTopicDeleted}} function/check, 
> probably should better be checking for
> {noformat}
> replicaStatesForTopic.isEmpty || replicaStatesForTopic.forall(_._2 == 
> ReplicaDeletionSuccessful)
> {noformat}
> I noticed it because in controller logs I found entries like:
> {noformat}
> [2016-03-04 13:27:29,115] DEBUG [Replica state machine on controller 1]: Are 
> all replicas for topic foo deleted Map() 
> (kafka.controller.ReplicaStateMachine)
> {noformat}
> even though normally they look like:
> {noformat}
> [2016-03-04 09:33:41,036] DEBUG [Replica state machine on controller 1]: Are 
> all replicas for topic foo deleted Map([Topic=foo,Partition=0,Replica=0] -> 
> ReplicaDeletionStarted, [Topic=foo,Partition=0,Replica=3] -> 
> ReplicaDeletionStarted, [Topic=foo,Partition=0,Replica=1] -> 
> ReplicaDeletionSuccessful) (kafka.controller.ReplicaStateMachine)
> {noformat}
> This may cause topic deletion request never to be cleared from ZK even when 
> topic has been deleted.
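A detail worth noting when reading the proposed change: forall over an empty collection is vacuously true in Scala, so the `Map()` case already satisfies the original check and the proposed `isEmpty ||` clause adds nothing, which is consistent with this being closed without a fix. Java streams behave the same way (a minimal illustration, with hypothetical state strings):

```java
import java.util.Map;

public class VacuousForall {
    // allMatch on an empty collection returns true, mirroring Scala's forall on Map().
    public static boolean allDeleted(Map<String, String> replicaStates) {
        return replicaStates.values().stream().allMatch("ReplicaDeletionSuccessful"::equals);
    }

    public static void main(String[] args) {
        System.out.println(allDeleted(Map.of()));                               // true (vacuously)
        System.out.println(allDeleted(Map.of("r1", "ReplicaDeletionStarted"))); // false
    }
}
```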





[jira] [Created] (KAFKA-5644) Transient test failure: ResetConsumerGroupOffsetTest.testResetOffsetsToZonedDateTime

2017-07-26 Thread Manikumar (JIRA)
Manikumar created KAFKA-5644:


 Summary: Transient test failure: 
ResetConsumerGroupOffsetTest.testResetOffsetsToZonedDateTime
 Key: KAFKA-5644
 URL: https://issues.apache.org/jira/browse/KAFKA-5644
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.11.0.0
Reporter: Manikumar
Priority: Minor


{quote}
unit.kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToZonedDateTime 
FAILED
java.lang.AssertionError: Expected the consumer group to reset to when 
offset was 50.
at kafka.utils.TestUtils$.fail(TestUtils.scala:339)
at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:853)
at 
unit.kafka.admin.ResetConsumerGroupOffsetTest.testResetOffsetsToZonedDateTime(ResetConsumerGroupOffsetTest.scala:188)
{quote}





[jira] [Resolved] (KAFKA-4889) 2G8lc

2017-08-16 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-4889.
--
Resolution: Invalid

> 2G8lc
> -
>
> Key: KAFKA-4889
> URL: https://issues.apache.org/jira/browse/KAFKA-4889
> Project: Kafka
>  Issue Type: Task
>Reporter: Vamsi Jakkula
>
> Creating of an issue using project keys and issue type names using the REST 
> API





[jira] [Resolved] (KAFKA-4951) KafkaProducer may send duplicated message sometimes

2017-08-16 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-4951.
--
Resolution: Fixed

This scenario is handled by the idempotent producer (KIP-98) released in Kafka 
0.11.0.0. Please reopen if you think the issue still exists.

> KafkaProducer may send duplicated message sometimes
> ---
>
> Key: KAFKA-4951
> URL: https://issues.apache.org/jira/browse/KAFKA-4951
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.1
>Reporter: cuiyang
>
> I found that KafkaProducer may send duplicated messages sometimes, which 
> happens when:
>  In the Sender thread:
>  NetworkClient::poll()
>  -> this.selector.poll()   //sends the message, such as "abc", 
> to the broker successfully
>  -> handleTimedOutRequests(responses, updatedNow);  //judges whether 
> the message "abc" sent above has expired or timed out; the judgment 
> is based on the parameter this.requestTimeoutMs and updatedNow;
>  -> response.request().callback().onComplete()
>  -> 
> completeBatch(batch, Errors.NETWORK_EXCEPTION, -1L, correlationId, now);   //if 
> the message was judged expired, it is re-enqueued and sent again 
> on the next loop;
>  -> this.accumulator.reenqueue(batch, now);
> The problem: even if the message "abc" was sent successfully to the broker, 
> it may still be judged expired, so it is sent again on the next 
> loop, which duplicates the message.
> I can reproduce this scenario reliably.
> In my opinion, Sender::handleTimedOutRequests() is not very useful here, 
> because the send response from the broker is successful and has no 
> error, which means the broker persisted the message; this function 
> can therefore induce duplicated-message problems.
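The KIP-98 fix referenced in the resolution de-duplicates on the broker by (producer id, sequence number), so a retried batch that was already written is acknowledged but not appended again. A toy model of the idea, not Kafka's actual implementation (which tracks sequences per partition and per batch):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class IdempotentBrokerModel {
    private final Map<Long, Integer> lastSeqByProducer = new HashMap<>();
    private final List<String> log = new ArrayList<>();

    // Append only if this (producerId, sequence) was not already written;
    // a timed-out-and-retried batch arrives with the same sequence and is dropped.
    public boolean append(long producerId, int sequence, String record) {
        int lastSeq = lastSeqByProducer.getOrDefault(producerId, -1);
        if (sequence <= lastSeq) return false; // duplicate retry: ack, don't re-append
        lastSeqByProducer.put(producerId, sequence);
        log.add(record);
        return true;
    }

    public int logSize() { return log.size(); }

    public static void main(String[] args) {
        IdempotentBrokerModel broker = new IdempotentBrokerModel();
        broker.append(1L, 0, "abc"); // original send succeeds
        broker.append(1L, 0, "abc"); // client-side timeout triggered a retry: dropped
        System.out.println(broker.logSize()); // 1, not 2
    }
}
```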





[jira] [Resolved] (KAFKA-4865) 2X8BF

2017-08-16 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4865?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-4865.
--
Resolution: Invalid

> 2X8BF
> -
>
> Key: KAFKA-4865
> URL: https://issues.apache.org/jira/browse/KAFKA-4865
> Project: Kafka
>  Issue Type: Bug
>Reporter: Vamsi Jakkula
>
> Creating of an issue using project keys and issue type names using the REST 
> API





[jira] [Resolved] (KAFKA-4847) 1Y30J

2017-08-16 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-4847.
--
Resolution: Invalid

> 1Y30J
> -
>
> Key: KAFKA-4847
> URL: https://issues.apache.org/jira/browse/KAFKA-4847
> Project: Kafka
>  Issue Type: Bug
>Reporter: Vamsi Jakkula
>
> Creating of an issue using project keys and issue type names using the REST 
> API





[jira] [Resolved] (KAFKA-4803) OT6Y1

2017-08-16 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-4803.
--
Resolution: Invalid

> OT6Y1
> -
>
> Key: KAFKA-4803
> URL: https://issues.apache.org/jira/browse/KAFKA-4803
> Project: Kafka
>  Issue Type: Bug
>Reporter: Vamsi Jakkula
>
> Creating of an issue using project keys and issue type names using the REST 
> API





[jira] [Resolved] (KAFKA-4813) 2h6R1

2017-08-16 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-4813.
--
Resolution: Invalid

> 2h6R1
> -
>
> Key: KAFKA-4813
> URL: https://issues.apache.org/jira/browse/KAFKA-4813
> Project: Kafka
>  Issue Type: Bug
>Reporter: Vamsi Jakkula
>
> Creating of an issue using project keys and issue type names using the REST 
> API





[jira] [Resolved] (KAFKA-4804) TdOZY

2017-08-16 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-4804.
--
Resolution: Invalid

> TdOZY
> -
>
> Key: KAFKA-4804
> URL: https://issues.apache.org/jira/browse/KAFKA-4804
> Project: Kafka
>  Issue Type: Bug
>Reporter: Vamsi Jakkula
>
> Creating of an issue using project keys and issue type names using the REST 
> API





[jira] [Resolved] (KAFKA-4821) 9244L

2017-08-16 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-4821.
--
Resolution: Invalid

> 9244L
> -
>
> Key: KAFKA-4821
> URL: https://issues.apache.org/jira/browse/KAFKA-4821
> Project: Kafka
>  Issue Type: Task
>Reporter: Vamsi Jakkula
>
> Creating of an issue using project keys and issue type names using the REST 
> API





[jira] [Resolved] (KAFKA-3796) SslTransportLayerTest.testInvalidEndpointIdentification fails on trunk

2017-08-16 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3796?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3796.
--
Resolution: Cannot Reproduce

Please reopen if you think the issue still exists.

> SslTransportLayerTest.testInvalidEndpointIdentification fails on trunk
> --
>
> Key: KAFKA-3796
> URL: https://issues.apache.org/jira/browse/KAFKA-3796
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients, security
>Affects Versions: 0.10.0.0
>Reporter: Rekha Joshi
>Assignee: Rekha Joshi
>Priority: Trivial
>
> org.apache.kafka.common.network.SslTransportLayerTest > 
> testEndpointIdentificationDisabled FAILED
> java.net.BindException: Can't assign requested address
> at sun.nio.ch.Net.bind0(Native Method)
> at sun.nio.ch.Net.bind(Net.java:433)
> at sun.nio.ch.Net.bind(Net.java:425)
> at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
> at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
> at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:67)
> at 
> org.apache.kafka.common.network.NioEchoServer.<init>(NioEchoServer.java:48)
> at 
> org.apache.kafka.common.network.SslTransportLayerTest.testEndpointIdentificationDisabled(SslTransportLayerTest.java:120)





[jira] [Resolved] (KAFKA-5737) KafkaAdminClient thread should be daemon

2017-08-19 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5737?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-5737.
--
   Resolution: Fixed
Fix Version/s: 1.0.0
   0.11.0.1

> KafkaAdminClient thread should be daemon
> 
>
> Key: KAFKA-5737
> URL: https://issues.apache.org/jira/browse/KAFKA-5737
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.11.0.0
>Reporter: Colin P. McCabe
>Assignee: Colin P. McCabe
> Fix For: 0.11.0.1, 1.0.0
>
>
> The admin client thread should be daemon, for consistency with the consumer 
> and producer threads.
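The fix amounts to setting the daemon flag on the client's background thread before it starts, so a JVM whose main thread finishes is not kept alive by an un-closed admin client. A minimal illustration (the thread name here is made up for the example):

```java
public class DaemonThreadDemo {
    public static Thread newNetworkThread(Runnable work) {
        Thread t = new Thread(work, "admin-client-network-thread");
        t.setDaemon(true); // JVM may exit even if this thread is still running
        return t;
    }

    public static void main(String[] args) {
        Thread t = newNetworkThread(() -> {});
        System.out.println(t.isDaemon()); // true
    }
}
```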





[jira] [Resolved] (KAFKA-3322) recurring errors

2017-08-19 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3322.
--
Resolution: Won't Fix

Please reopen if you think the issue still exists.


> recurring errors
> 
>
> Key: KAFKA-3322
> URL: https://issues.apache.org/jira/browse/KAFKA-3322
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.9.0.0
> Environment: kafka0.9.0 and zookeeper 3.4.6
>Reporter: jackie
>
> We're getting hundreds of these errors with Kafka 0.8, and topics become 
> unavailable after running for a few days. It looks like 
> https://issues.apache.org/jira/browse/KAFKA-1314





[jira] [Resolved] (KAFKA-4078) VIP for Kafka doesn't work

2017-08-19 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-4078.
--
Resolution: Cannot Reproduce

Please reopen if you think the issue still exists.


> VIP for Kafka  doesn't work 
> 
>
> Key: KAFKA-4078
> URL: https://issues.apache.org/jira/browse/KAFKA-4078
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.9.0.1
>Reporter: chao
>
> We create VIP for chao007kfk002.chao007.com, 9092 ,chao007kfk003.chao007.com, 
> 9092 ,chao007kfk001.chao007.com, 9092
> But we found that the Kafka client API has an issue: a client metadata 
> update returns all three brokers, so the client creates three connections, to 
> 001, 002 and 003.
> When we change the VIP to chao008kfk002.chao008.com, 9092 
> ,chao008kfk003.chao008.com, 9092 ,chao008kfk001.chao008.com, 9092, 
> it still produces data to 007.
> The following is the log information:
> sasl.kerberos.ticket.renew.window.factor = 0.8
> bootstrap.servers = [kfk.chao.com:9092]
> client.id = 
> 2016-08-23 07:00:48,451:DEBUG kafka-producer-network-thread | producer-1 
> (NetworkClient.java:623) - Initialize connection to node -1 for sending 
> metadata request
> 2016-08-23 07:00:48,452:DEBUG kafka-producer-network-thread | producer-1 
> (NetworkClient.java:487) - Initiating connection to node -1 at 
> kfk.chao.com:9092.
> 2016-08-23 07:00:48,463:DEBUG kafka-producer-network-thread | producer-1 
> (Metrics.java:201) - Added sensor with name node--1.bytes-sent
>   
>   
> 2016-08-23 07:00:48,489:DEBUG kafka-producer-network-thread | producer-1 
> (NetworkClient.java:619) - Sending metadata request 
> ClientRequest(expectResponse=true, callback=null, 
> request=RequestSend(header={api_key=3,api_version=0,correlation_id=0,client_id=producer-1},
>  body={topics=[chao_vip]}), isInitiatedByNetworkClient, 
> createdTimeMs=1471935648465, sendTimeMs=0) to node -1
> 2016-08-23 07:00:48,512:DEBUG kafka-producer-network-thread | producer-1 
> (Metadata.java:172) - Updated cluster metadata version 2 to Cluster(nodes = 
> [Node(1, chao007kfk002.chao007.com, 9092), Node(2, chao007kfk003.chao007.com, 
> 9092), Node(0, chao007kfk001.chao007.com, 9092)], partitions = 
> [Partition(topic = chao_vip, partition = 0, leader = 0, replicas = [0,], isr 
> = [0,], Partition(topic = chao_vip, partition = 3, leader = 0, replicas = 
> [0,], isr = [0,], Partition(topic = chao_vip, partition = 2, leader = 2, 
> replicas = [2,], isr = [2,], Partition(topic = chao_vip, partition = 1, 
> leader = 1, replicas = [1,], isr = [1,], Partition(topic = chao_vip, 
> partition = 4, leader = 1, replicas = [1,], isr = [1,]])





[jira] [Resolved] (KAFKA-3951) kafka.common.KafkaStorageException: I/O exception in append to log

2017-08-19 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3951.
--
Resolution: Cannot Reproduce

Please reopen if you think the issue still exists.


> kafka.common.KafkaStorageException: I/O exception in append to log
> --
>
> Key: KAFKA-3951
> URL: https://issues.apache.org/jira/browse/KAFKA-3951
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 0.9.0.1
>Reporter: wanzi.zhao
> Attachments: server-1.properties, server.properties
>
>
> I have two brokers on the same server using two ports, 10.45.33.195:9092 and 
> 10.45.33.195:9093. They use two log directories, "log.dirs=/tmp/kafka-logs" and 
> "log.dirs=/tmp/kafka-logs-1". When I shut down my consumer application (Java 
> API), then change the groupId and restart it, my Kafka brokers stop 
> working. This is the stack trace I get:
> [2016-07-11 17:02:47,314] INFO [Group Metadata Manager on Broker 0]: Loading 
> offsets and group metadata from [__consumer_offsets,0] 
> (kafka.coordinator.GroupMetadataManager)
> [2016-07-11 17:02:47,955] FATAL [Replica Manager on Broker 0]: Halting due to 
> unrecoverable I/O error while handling produce request:  
> (kafka.server.ReplicaManager)
> kafka.common.KafkaStorageException: I/O exception in append to log 
> '__consumer_offsets-38'
> at kafka.log.Log.append(Log.scala:318)
> at kafka.cluster.Partition$$anonfun$9.apply(Partition.scala:442)
> at kafka.cluster.Partition$$anonfun$9.apply(Partition.scala:428)
> at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262)
> at kafka.utils.CoreUtils$.inReadLock(CoreUtils.scala:268)
> at kafka.cluster.Partition.appendMessagesToLeader(Partition.scala:428)
> at 
> kafka.server.ReplicaManager$$anonfun$appendToLocalLog$2.apply(ReplicaManager.scala:401)
> at 
> kafka.server.ReplicaManager$$anonfun$appendToLocalLog$2.apply(ReplicaManager.scala:386)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
> at scala.collection.immutable.Map$Map1.foreach(Map.scala:109)
> at 
> scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
> at scala.collection.AbstractTraversable.map(Traversable.scala:105)
> at 
> kafka.server.ReplicaManager.appendToLocalLog(ReplicaManager.scala:386)
> at 
> kafka.server.ReplicaManager.appendMessages(ReplicaManager.scala:322)
> at 
> kafka.coordinator.GroupMetadataManager.store(GroupMetadataManager.scala:228)
> at 
> kafka.coordinator.GroupCoordinator$$anonfun$handleCommitOffsets$9.apply(GroupCoordinator.scala:429)
> at 
> kafka.coordinator.GroupCoordinator$$anonfun$handleCommitOffsets$9.apply(GroupCoordinator.scala:429)
> at scala.Option.foreach(Option.scala:236)
> at 
> kafka.coordinator.GroupCoordinator.handleCommitOffsets(GroupCoordinator.scala:429)
> at 
> kafka.server.KafkaApis.handleOffsetCommitRequest(KafkaApis.scala:280)
> at kafka.server.KafkaApis.handle(KafkaApis.scala:76)
> at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.io.FileNotFoundException: 
> /tmp/kafka-logs/__consumer_offsets-38/.index (No such 
> file or directory)
> at java.io.RandomAccessFile.open0(Native Method)
> at java.io.RandomAccessFile.open(RandomAccessFile.java:316)
> at java.io.RandomAccessFile.<init>(RandomAccessFile.java:243)
> at 
> kafka.log.OffsetIndex$$anonfun$resize$1.apply(OffsetIndex.scala:277)
> at 
> kafka.log.OffsetIndex$$anonfun$resize$1.apply(OffsetIndex.scala:276)
> at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262)
> at kafka.log.OffsetIndex.resize(OffsetIndex.scala:276)
> at 
> kafka.log.OffsetIndex$$anonfun$trimToValidSize$1.apply$mcV$sp(OffsetIndex.scala:265)
> at 
> kafka.log.OffsetIndex$$anonfun$trimToValidSize$1.apply(OffsetIndex.scala:265)
> at 
> kafka.log.OffsetIndex$$anonfun$trimToValidSize$1.apply(OffsetIndex.scala:265)
> at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262)
> at kafka.log.OffsetIndex.trimToValidSize(OffsetIndex.scala:264)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-1495) Kafka Example SimpleConsumerDemo

2017-08-19 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-1495.
--
Resolution: Won't Fix

> Kafka Example SimpleConsumerDemo 
> -
>
> Key: KAFKA-1495
> URL: https://issues.apache.org/jira/browse/KAFKA-1495
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.8.1.1
> Environment: Mac OS
>Reporter: darion yaphet
>Assignee: Jun Rao
>
> While running the official SimpleConsumerDemo under 
> kafka-0.8.1.1-src/examples/src/main/java/kafka/examples on my machine, I 
> found two directories under /tmp/kafka-logs, topic2-0 and topic2-1, and 
> one of them is empty:
> ➜  kafka-logs  ls -lF  topic2-0  topic2-1
> topic2-0:
> total 21752
> -rw-r--r--  1 2011204  wheel  10485760  6 17 17:34 .index
> -rw-r--r--  1 2011204  wheel651109  6 17 18:44 .log
> topic2-1:
> total 20480
> -rw-r--r--  1 2011204  wheel  10485760  6 17 17:34 .index
> -rw-r--r--  1 2011204  wheel 0  6 17 17:34 .log 
> Is this a bug, or is there something that should be configured in the source 
> code? Thank you.





[jira] [Resolved] (KAFKA-2231) Deleting a topic fails

2017-08-19 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-2231.
--
Resolution: Cannot Reproduce

Topic deletion is more stable in the latest releases. Please reopen if you 
think the issue still exists

> Deleting a topic fails
> --
>
> Key: KAFKA-2231
> URL: https://issues.apache.org/jira/browse/KAFKA-2231
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.1
> Environment: Windows 8.1
>Reporter: James G. Haberly
>Priority: Minor
>
> delete.topic.enable=true is in config\server.properties.
> Using --list shows the topic "marked for deletion".
> Stopping and restarting kafka and zookeeper does not delete the topic; it 
> remains "marked for deletion".
> Trying to recreate the topic fails with "Topic XXX already exists".





[jira] [Resolved] (KAFKA-3953) start kafka fail

2017-08-19 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3953.
--
Resolution: Cannot Reproduce

 Please reopen if you think the issue still exists


> start kafka fail
> 
>
> Key: KAFKA-3953
> URL: https://issues.apache.org/jira/browse/KAFKA-3953
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.2
> Environment: Linux host-172-28-0-3 3.10.0-327.18.2.el7.x86_64 #1 SMP 
> Thu May 12 11:03:55 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
>Reporter: ffh
>
> Kafka fails to start. Error message:
> [2016-07-12 03:57:32,717] FATAL [Kafka Server 0], Fatal error during 
> KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
> java.util.NoSuchElementException: None.get
>   at scala.None$.get(Option.scala:313)
>   at scala.None$.get(Option.scala:311)
>   at kafka.controller.KafkaController.clientId(KafkaController.scala:215)
>   at 
> kafka.controller.ControllerBrokerRequestBatch.<init>(ControllerChannelManager.scala:189)
>   at 
> kafka.controller.PartitionStateMachine.<init>(PartitionStateMachine.scala:48)
>   at kafka.controller.KafkaController.<init>(KafkaController.scala:156)
>   at kafka.server.KafkaServer.startup(KafkaServer.scala:148)
>   at 
> kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:29)
>   at kafka.Kafka$.main(Kafka.scala:72)
>   at kafka.Kafka.main(Kafka.scala)
> [2016-07-12 03:57:33,124] FATAL Fatal error during KafkaServerStartable 
> startup. Prepare to shutdown (kafka.server.KafkaServerStartable)
> java.util.NoSuchElementException: None.get
>   at scala.None$.get(Option.scala:313)
>   at scala.None$.get(Option.scala:311)
>   at kafka.controller.KafkaController.clientId(KafkaController.scala:215)
>   at 
> kafka.controller.ControllerBrokerRequestBatch.<init>(ControllerChannelManager.scala:189)
>   at 
> kafka.controller.PartitionStateMachine.<init>(PartitionStateMachine.scala:48)
>   at kafka.controller.KafkaController.<init>(KafkaController.scala:156)
>   at kafka.server.KafkaServer.startup(KafkaServer.scala:148)
>   at 
> kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:29)
>   at kafka.Kafka$.main(Kafka.scala:72)
>   at kafka.Kafka.main(Kafka.scala)
> config:
> # Generated by Apache Ambari. Tue Jul 12 03:18:02 2016
> authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
> auto.create.topics.enable=true
> auto.leader.rebalance.enable=true
> broker.id=0
> compression.type=producer
> controlled.shutdown.enable=true
> controlled.shutdown.max.retries=3
> controlled.shutdown.retry.backoff.ms=5000
> controller.message.queue.size=10
> controller.socket.timeout.ms=3
> default.replication.factor=1
> delete.topic.enable=false
> external.kafka.metrics.exclude.prefix=kafka.network.RequestMetrics,kafka.server.DelayedOperationPurgatory,kafka.server.BrokerTopicMetrics.BytesRejectedPerSec
> external.kafka.metrics.include.prefix=kafka.network.RequestMetrics.ResponseQueueTimeMs.request.OffsetCommit.98percentile,kafka.network.RequestMetrics.ResponseQueueTimeMs.request.Offsets.95percentile,kafka.network.RequestMetrics.ResponseSendTimeMs.request.Fetch.95percentile,kafka.network.RequestMetrics.RequestsPerSec.request
> fetch.purgatory.purge.interval.requests=1
> kafka.ganglia.metrics.group=kafka
> kafka.ganglia.metrics.host=localhost
> kafka.ganglia.metrics.port=8671
> kafka.ganglia.metrics.reporter.enabled=true
> kafka.metrics.reporters=
> kafka.timeline.metrics.host=
> kafka.timeline.metrics.maxRowCacheSize=1
> kafka.timeline.metrics.port=
> kafka.timeline.metrics.reporter.enabled=true
> kafka.timeline.metrics.reporter.sendInterval=5900
> leader.imbalance.check.interval.seconds=300
> leader.imbalance.per.broker.percentage=10
> listeners=PLAINTEXT://host-172-28-0-3:6667
> log.cleanup.interval.mins=10
> log.dirs=/kafka-logs
> log.index.interval.bytes=4096
> log.index.size.max.bytes=10485760
> log.retention.bytes=-1
> log.retention.hours=168
> log.roll.hours=168
> log.segment.bytes=1073741824
> message.max.bytes=100
> min.insync.replicas=1
> num.io.threads=8
> num.network.threads=3
> num.partitions=1
> num.recovery.threads.per.data.dir=1
> num.replica.fetchers=1
> offset.metadata.max.bytes=4096
> offsets.commit.required.acks=-1
> offsets.commit.timeout.ms=5000
> offsets.load.buffer.size=5242880
> offsets.retention.check.interval.ms=60
> offsets.retention.minutes=8640
> offsets.topic.compression.codec=0
> offsets.topic.num.partitions=50
> offsets.topic.replication.factor=3
> offsets.topic.segment.bytes=104857600
> principal.to.local.class=kafka.security.auth.KerberosPrincipalToLocal
> producer.purgatory.purge.interval.requests=1
> queued.max.requests=500
> replica.fetch.max.bytes=1048576
> replica.fetch.min.bytes=1
> replica.fetch.wait.max.ms=500
> 

[jira] [Resolved] (KAFKA-3327) Warning from kafka mirror maker about ssl properties not valid

2017-08-19 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3327.
--
Resolution: Cannot Reproduce

Most likely related to a configuration issue. Please reopen if you think the 
issue still exists


> Warning from kafka mirror maker about ssl properties not valid
> --
>
> Key: KAFKA-3327
> URL: https://issues.apache.org/jira/browse/KAFKA-3327
> Project: Kafka
>  Issue Type: Test
>  Components: config
>Affects Versions: 0.9.0.1
> Environment: CentOS release 6.5
>Reporter: Munir Khan
>Priority: Minor
>  Labels: kafka, mirror-maker, ssl
>
> I am trying to run Mirror maker  over SSL. I have configured my broker 
> following the procedure described in this document 
> http://kafka.apache.org/documentation.html#security_overview 
> I get the following warning when I start the mirror maker:
> [root@munkhan-kafka1 kafka_2.10-0.9.0.1]# bin/kafka-run-class.sh 
> kafka.tools.MirrorMaker --consumer.config 
> config/datapush-consumer-ssl.properties --producer.config 
> config/datapush-producer-ssl.properties --num.streams 2 --whitelist test1&
> [1] 4701
> [root@munkhan-kafka1 kafka_2.10-0.9.0.1]# [2016-03-03 10:24:35,348] WARN 
> block.on.buffer.full config is deprecated and will be removed soon. Please 
> use max.block.ms (org.apache.kafka.clients.producer.KafkaProducer)
> [2016-03-03 10:24:35,523] WARN The configuration producer.type = sync was 
> supplied but isn't a known config. 
> (org.apache.kafka.clients.producer.ProducerConfig)
> [2016-03-03 10:24:35,523] WARN The configuration ssl.keypassword = test1234 
> was supplied but isn't a known config. 
> (org.apache.kafka.clients.producer.ProducerConfig)
> [2016-03-03 10:24:35,523] WARN The configuration compression.codec = none was 
> supplied but isn't a known config. 
> (org.apache.kafka.clients.producer.ProducerConfig)
> [2016-03-03 10:24:35,523] WARN The configuration serializer.class = 
> kafka.serializer.DefaultEncoder was supplied but isn't a known config. 
> (org.apache.kafka.clients.producer.ProducerConfig)
> [2016-03-03 10:24:35,617] WARN Property security.protocol is not valid 
> (kafka.utils.VerifiableProperties)
> [2016-03-03 10:24:35,617] WARN Property ssl.keypassword is not valid 
> (kafka.utils.VerifiableProperties)
> [2016-03-03 10:24:35,617] WARN Property ssl.keystore.location is not valid 
> (kafka.utils.VerifiableProperties)
> [2016-03-03 10:24:35,618] WARN Property ssl.keystore.password is not valid 
> (kafka.utils.VerifiableProperties)
> [2016-03-03 10:24:35,618] WARN Property ssl.truststore.location is not valid 
> (kafka.utils.VerifiableProperties)
> [2016-03-03 10:24:35,618] WARN Property ssl.truststore.password is not valid 
> (kafka.utils.VerifiableProperties)
> [2016-03-03 10:24:35,752] WARN Property security.protocol is not valid 
> (kafka.utils.VerifiableProperties)
> [2016-03-03 10:24:35,752] WARN Property ssl.keypassword is not valid 
> (kafka.utils.VerifiableProperties)
> [2016-03-03 10:24:35,752] WARN Property ssl.keystore.location is not valid 
> (kafka.utils.VerifiableProperties)
> [2016-03-03 10:24:35,752] WARN Property ssl.keystore.password is not valid 
> (kafka.utils.VerifiableProperties)
> [2016-03-03 10:24:35,752] WARN Property ssl.truststore.location is not valid 
> (kafka.utils.VerifiableProperties)
> [2016-03-03 10:24:35,753] WARN Property ssl.truststore.password is not valid 
> (kafka.utils.VerifiableProperties)
> [2016-03-03 10:24:36,251] WARN No broker partitions consumed by consumer 
> thread test-consumer-group_munkhan-kafka1.cisco.com-1457018675755-b9bb4c75-0 
> for topic test1 (kafka.consumer.RangeAssignor)
> [2016-03-03 10:24:36,251] WARN No broker partitions consumed by consumer 
> thread test-consumer-group_munkhan-kafka1.cisco.com-1457018675755-b9bb4c75-0 
> for topic test1 (kafka.consumer.RangeAssignor)
> However, the mirror maker is able to mirror data. If I remove the 
> configurations related to the warning messages from my producer config, the 
> mirror maker does not work, so it seems that despite the warnings shown 
> above the SSL configuration properties are used somehow. 
> My question is: are these warnings harmless in this context?
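For reference, the `kafka.utils.VerifiableProperties` warnings above come from the old Scala clients, which do not recognize the new-client security settings; SSL support in 0.9 requires the new Java clients (for MirrorMaker, the new consumer). A hedged sketch of a consumer config, with hypothetical hostnames and paths; note that the correct key is `ssl.key.password`, whereas the log above shows `ssl.keypassword`, which is not a known config:

```properties
# Hypothetical MirrorMaker consumer config for SSL (0.9+ new consumer).
# If VerifiableProperties warnings appear, the old Scala consumer (which
# does not understand SSL settings) is being used instead.
bootstrap.servers=broker1:9093
group.id=mirror-group
security.protocol=SSL
ssl.truststore.location=/var/private/ssl/client.truststore.jks
ssl.truststore.password=test1234
ssl.keystore.location=/var/private/ssl/client.keystore.jks
ssl.keystore.password=test1234
ssl.key.password=test1234
```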





[jira] [Resolved] (KAFKA-2093) Remove logging error if we throw exception

2017-08-19 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-2093.
--
Resolution: Won't Fix

The Scala producer is deprecated. Please reopen if you think the issue still exists


> Remove logging error if we throw exception
> --
>
> Key: KAFKA-2093
> URL: https://issues.apache.org/jira/browse/KAFKA-2093
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ivan Balashov
>Priority: Trivial
>
> On failure, the Kafka producer logs an error AND throws an exception. This 
> can pose problems, since the client application cannot flexibly control 
> whether a particular exception should be logged, and logging becomes an 
> all-or-nothing choice for a particular logger.
> We should remove the error logging wherever we decide to throw an exception.
> Some examples of this:
> kafka.client.ClientUtils$:89
> kafka.producer.SyncProducer:103
> If no one has objections, I can search around for other cases of logging + 
> throwing which should also be fixed.
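The log-and-throw anti-pattern described above can be illustrated outside Kafka. This is a generic Java sketch (hypothetical method names, not the actual ClientUtils/SyncProducer code) showing why throwing alone leaves the reporting decision with the caller:

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class LogAndThrow {
    private static final Logger LOG = Logger.getLogger(LogAndThrow.class.getName());

    // Anti-pattern: the throw site logs, so every caller gets a log line
    // whether it wants one or not.
    static void sendLogAndThrow() {
        RuntimeException e = new RuntimeException("send failed");
        LOG.log(Level.SEVERE, "send failed", e); // duplicate report
        throw e;
    }

    // Preferred: just throw; the caller owns the logging decision.
    static void sendThrowOnly() {
        throw new RuntimeException("send failed");
    }

    static String callerView() {
        try {
            sendThrowOnly();
            return "no exception";
        } catch (RuntimeException e) {
            // The caller decides: retry silently, wrap, or log exactly once.
            return "caller handled: " + e.getMessage();
        }
    }

    public static void main(String[] args) {
        System.out.println(callerView()); // prints "caller handled: send failed"
    }
}
```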





[jira] [Resolved] (KAFKA-2053) Make initZk a protected function

2017-08-19 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-2053.
--
Resolution: Won't Fix

 Please reopen if you think the requirement still exists

> Make initZk a protected function
> 
>
> Key: KAFKA-2053
> URL: https://issues.apache.org/jira/browse/KAFKA-2053
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 0.8.2.0
>Reporter: Christian Kampka
>Priority: Minor
> Attachments: make-initzk-protected
>
>
> In our environment, we have established an external procedure to notify 
> clients of changes in the zookeeper cluster configuration, especially 
> appearance and disappearance of nodes. It has also become quite common to run 
> Kafka as an embedded service (especially in tests).
> When doing so, it would make things easier if it were possible to manipulate 
> the creation of the zookeeper client to supply Kafka with a specialized 
> ZooKeeper client that is adjusted to our needs but of course API compatible 
> with the ZkClient.
> Therefore, I would like to propose to make the initZk method protected so we 
> will be able to simply override it for client creation. 
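The request amounts to the classic protected factory method pattern. A minimal Java sketch with hypothetical class names (Kafka's actual initZk lives in the Scala class kafka.server.KafkaServer):

```java
public class InitZkSketch {
    interface ZkClientLike { String describe(); }

    static class ServerSketch {
        private ZkClientLike zkClient;

        public void startup() {
            this.zkClient = initZk(); // subclass hook
        }

        public ZkClientLike zk() { return zkClient; }

        // Default client creation; protected so embedders can override it.
        protected ZkClientLike initZk() {
            return () -> "default ZkClient";
        }
    }

    // An embedding application (e.g. a test harness) supplies an
    // API-compatible client wired to its own cluster-change notifications.
    static class EmbeddedTestServer extends ServerSketch {
        @Override protected ZkClientLike initZk() {
            return () -> "custom ZkClient";
        }
    }

    public static void main(String[] args) {
        ServerSketch server = new EmbeddedTestServer();
        server.startup();
        System.out.println(server.zk().describe()); // prints "custom ZkClient"
    }
}
```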





[jira] [Resolved] (KAFKA-3413) Load Error Message should be a Warning

2017-08-19 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3413.
--
Resolution: Won't Fix

 Please reopen if you think the issue still exists


> Load Error Message should be a Warning
> --
>
> Key: KAFKA-3413
> URL: https://issues.apache.org/jira/browse/KAFKA-3413
> Project: Kafka
>  Issue Type: Improvement
>  Components: consumer, offset manager
>Affects Versions: 0.9.0.0, 0.9.0.1
>Reporter: Scott Reynolds
>Assignee: Neha Narkhede
>
> There is an error message from AbstractReplicaFetcherThread that isn't 
> really an error.
> Each implementation of this thread can log when an error or fatal error 
> occurs. ReplicaFetcherThread has warn, error, and fatal calls in its 
> handleOffsetOutOfRange method, while ConsumerFetcherThread seems to reset 
> itself without logging an error.
> The reset message shouldn't be at error level, as it doesn't indicate any 
> real error.
> This patch makes it a warning: 
> https://github.com/apache/kafka/compare/trunk...SupermanScott:offset-reset-to-warn?diff=split=offset-reset-to-warn





[jira] [Resolved] (KAFKA-2296) Not able to delete topic on latest kafka

2017-08-19 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-2296.
--
Resolution: Duplicate

> Not able to delete topic on latest kafka
> 
>
> Key: KAFKA-2296
> URL: https://issues.apache.org/jira/browse/KAFKA-2296
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.1
>Reporter: Andrew M
>
> Was able to reproduce [inability to delete 
> topic|https://issues.apache.org/jira/browse/KAFKA-1397?focusedCommentId=14491442=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14491442]
>  on a running cluster with Kafka 0.8.2.1.
> The cluster consists of 2 c3.xlarge AWS instances with sufficient storage 
> attached. All communication between nodes goes through an AWS VPC.
> Some warnings from the logs:
> {noformat}[Controller-1234-to-broker-4321-send-thread], Controller 1234 epoch 
> 20 fails to send request 
> Name:UpdateMetadataRequest;Version:0;Controller:1234;ControllerEpoch:20;CorrelationId:24047;ClientId:id_1234-host_1.2.3.4-port_6667;AliveBrokers:id:1234,host:1.2.3.4,port:6667,id:4321,host:4.3.2.1,port:6667;PartitionState:[topic_name,45]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:4321,1234,LeaderEpoch:0,ControllerEpoch:19),ReplicationFactor:2),AllReplicas:1234,4321),[topic_name,27]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:4321,1234,LeaderEpoch:0,ControllerEpoch:19),ReplicationFactor:2),AllReplicas:1234,4321),[topic_name,17]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:4321,1234,LeaderEpoch:0,ControllerEpoch:19),ReplicationFactor:2),AllReplicas:1234,4321),[topic_name,49]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:4321,1234,LeaderEpoch:0,ControllerEpoch:19),ReplicationFactor:2),AllReplicas:1234,4321),[topic_name,7]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:4321,1234,LeaderEpoch:0,ControllerEpoch:19),ReplicationFactor:2),AllReplicas:1234,4321),[topic_name,26]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:1234,4321,LeaderEpoch:0,ControllerEpoch:20),ReplicationFactor:2),AllReplicas:4321,1234),[topic_name,62]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:1234,4321,LeaderEpoch:0,ControllerEpoch:20),ReplicationFactor:2),AllReplicas:4321,1234),[topic_name,18]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:1234,4321,LeaderEpoch:0,ControllerEpoch:20),ReplicationFactor:2),AllReplicas:4321,1234),[topic_name,36]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:1234,4321,LeaderEpoch:0,ControllerEpoch:20),ReplicationFactor:2),AllReplicas:4321,1234),[topic_name,29]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:4321,1234,LeaderEpoch:0,ControllerEpoch:19),ReplicationFactor:2),AllReplicas:1234,4321),[topic_name,53]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:4321,1234,LeaderEpoch:0,ControllerEpoch:19),ReplicationFactor:2),AllReplicas:1234,4321),[topic_name,52]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:1234,4321,LeaderEpoch:0,ControllerEpoch:20),ReplicationFactor:2),AllReplicas:4321,1234),[topic_name,2]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:1234,4321,LeaderEpoch:0,ControllerEpoch:20),ReplicationFactor:2),AllReplicas:4321,1234),[topic_name,12]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:1234,4321,LeaderEpoch:0,ControllerEpoch:20),ReplicationFactor:2),AllReplicas:4321,1234),[topic_name,33]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:4321,1234,LeaderEpoch:0,ControllerEpoch:19),ReplicationFactor:2),AllReplicas:1234,4321),[topic_name,14]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:1234,4321,LeaderEpoch:0,ControllerEpoch:20),ReplicationFactor:2),AllReplicas:4321,1234),[topic_name,63]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:4321,1234,LeaderEpoch:0,ControllerEpoch:19),ReplicationFactor:2),AllReplicas:1234,4321),[topic_name,30]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:1234,4321,LeaderEpoch:0,ControllerEpoch:20),ReplicationFactor:2),AllReplicas:4321,1234),[topic_name,6]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:1234,4321,LeaderEpoch:0,ControllerEpoch:20),ReplicationFactor:2),AllReplicas:4321,1234),[topic_name,28]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:1234,4321,LeaderEpoch:0,ControllerEpoch:20),ReplicationFactor:2),AllReplicas:4321,1234),[topic_name,38]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:1234,4321,LeaderEpoch:0,ControllerEpoch:20),ReplicationFactor:2),AllReplicas:4321,1234),[topic_name,24]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:1234,4321,LeaderEpoch:0,ControllerEpoch:20),ReplicationFactor:2),AllReplicas:4321,1234),[topic_name,31]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:4321,1234,LeaderEpoch:0,ControllerEpoch:19),ReplicationFactor:2),AllReplicas:1234,4321),[topic_name,4]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:1234,4321,LeaderEpoch:0,ControllerEpoch:20),ReplicationFactor:2),AllReplicas:4321,1234),[topic_name,20]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:1234,4321,LeaderEpoch:0,ControllerEpoch:20),ReplicationFactor:2),AllReplicas:4321,1234),[topic_name,54]
>  -> 
> 

[jira] [Resolved] (KAFKA-2289) KafkaProducer logs erroneous warning on startup

2017-08-19 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-2289.
--
Resolution: Fixed

This has been fixed.

> KafkaProducer logs erroneous warning on startup
> ---
>
> Key: KAFKA-2289
> URL: https://issues.apache.org/jira/browse/KAFKA-2289
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.8.2.1
>Reporter: Henning Schmiedehausen
>Priority: Trivial
>
> When creating a new KafkaProducer using the 
> KafkaProducer(KafkaConfig, Serializer, Serializer) constructor, Kafka 
> will list the following lines, which are harmless but are still at WARN level:
> WARN  [2015-06-19 23:13:56,557] 
> org.apache.kafka.clients.producer.ProducerConfig: The configuration 
> value.serializer = class  was supplied but isn't a known config.
> WARN  [2015-06-19 23:13:56,557] 
> org.apache.kafka.clients.producer.ProducerConfig: The configuration 
> key.serializer = class  was supplied but isn't a known config.





[jira] [Resolved] (KAFKA-3927) kafka broker config docs issue

2017-08-22 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3927.
--
Resolution: Later

Yes, these changes were done in KAFKA-615. Please reopen if the issue still 
exists. 


> kafka broker config docs issue
> --
>
> Key: KAFKA-3927
> URL: https://issues.apache.org/jira/browse/KAFKA-3927
> Project: Kafka
>  Issue Type: Bug
>  Components: website
>Affects Versions: 0.10.0.0
>Reporter: Shawn Guo
>Priority: Minor
>
> https://kafka.apache.org/documentation.html#brokerconfigs
> log.flush.interval.messages 
> default value is "9223372036854775807"
> log.flush.interval.ms 
> default value is null
> log.flush.scheduler.interval.ms 
> default value is "9223372036854775807"
> etc. Obviously these default values look incorrect. How are these docs 
> generated? It looks confusing. 





[jira] [Resolved] (KAFKA-5401) Kafka Over TLS Error - Failed to send SSL Close message - Broken Pipe

2017-08-22 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-5401.
--
Resolution: Duplicate

> Kafka Over TLS Error - Failed to send SSL Close message - Broken Pipe
> -
>
> Key: KAFKA-5401
> URL: https://issues.apache.org/jira/browse/KAFKA-5401
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.10.1.1
> Environment: SLES 11 , Kakaf Over TLS 
>Reporter: PaVan
>  Labels: security
>
> SLES 11 
> WARN Failed to send SSL Close message 
> (org.apache.kafka.common.network.SslTransportLayer)
> java.io.IOException: Broken pipe
>  at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
>  at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
>  at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
>  at sun.nio.ch.IOUtil.write(IOUtil.java:65)
>  at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:471)
>  at 
> org.apache.kafka.common.network.SslTransportLayer.flush(SslTransportLayer.java:194)
>  at 
> org.apache.kafka.common.network.SslTransportLayer.close(SslTransportLayer.java:148)
>  at 
> org.apache.kafka.common.network.KafkaChannel.close(KafkaChannel.java:45)
>  at org.apache.kafka.common.network.Selector.close(Selector.java:442)
>  at org.apache.kafka.common.network.Selector.poll(Selector.java:310)
>  at 
> org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:256)
>  at 
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:216)
>  at 
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:128)
>  at java.lang.Thread.run(Thread.java:745)





[jira] [Resolved] (KAFKA-5239) Producer buffer pool allocates memory inside a lock.

2017-08-22 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-5239.
--
   Resolution: Fixed
 Assignee: Sean McCauliff
Fix Version/s: 1.0.0

> Producer buffer pool allocates memory inside a lock.
> 
>
> Key: KAFKA-5239
> URL: https://issues.apache.org/jira/browse/KAFKA-5239
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Reporter: Sean McCauliff
>Assignee: Sean McCauliff
> Fix For: 1.0.0
>
>
> KAFKA-4840 placed the ByteBuffer allocation inside the critical section.  
> Previously byte buffer allocation happened outside of the critical section.





[jira] [Resolved] (KAFKA-3800) java client can`t poll msg

2017-08-22 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3800?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3800.
--
Resolution: Cannot Reproduce

 Please reopen if the issue still exists. 


> java client can`t poll msg
> --
>
> Key: KAFKA-3800
> URL: https://issues.apache.org/jira/browse/KAFKA-3800
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.9.0.1
> Environment: java8,win7 64
>Reporter: frank
>Assignee: Neha Narkhede
>
> I use a mixed-case topic name (e.g. Test_4), and after poll() the messages 
> are null. Why? All-lowercase topic names are OK. I also tried Node.js and 
> kafka-console-consumers.bat, and both are OK.





[jira] [Resolved] (KAFKA-3653) expose the queue size in ControllerChannelManager

2017-08-22 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3653?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3653.
--
Resolution: Fixed

Fixed in KAFKA-5135/KIP-143

> expose the queue size in ControllerChannelManager
> -
>
> Key: KAFKA-3653
> URL: https://issues.apache.org/jira/browse/KAFKA-3653
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jun Rao
>Assignee: Gwen Shapira
>
> Currently, ControllerChannelManager maintains a queue per broker. If the 
> queue fills up, metadata propagation to the broker is delayed. It would be 
> useful to expose a metric on the size of the queue for monitoring.
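A queue-size metric is typically registered as a gauge, a callback that reads the live size on demand, rather than a counter updated on every enqueue. A minimal Java sketch with hypothetical names (not Kafka's actual Yammer-metrics code):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.function.Supplier;

public class QueueSizeGauge {
    // A gauge is just a deferred read of the current value.
    static Supplier<Integer> gaugeFor(BlockingQueue<?> queue) {
        return queue::size;
    }

    public static void main(String[] args) throws InterruptedException {
        // Stand-in for a per-broker controller request queue.
        BlockingQueue<String> brokerQueue = new LinkedBlockingQueue<>();
        Supplier<Integer> gauge = gaugeFor(brokerQueue);
        brokerQueue.put("UpdateMetadataRequest");
        brokerQueue.put("LeaderAndIsrRequest");
        System.out.println("queue size = " + gauge.get()); // prints "queue size = 2"
    }
}
```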





[jira] [Resolved] (KAFKA-3139) JMX metric ProducerRequestPurgatory doesn't exist, docs seem wrong.

2017-08-22 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3139.
--
Resolution: Fixed

Fixed in KAFKA-4252

> JMX metric ProducerRequestPurgatory doesn't exist, docs seem wrong.
> ---
>
> Key: KAFKA-3139
> URL: https://issues.apache.org/jira/browse/KAFKA-3139
> Project: Kafka
>  Issue Type: Bug
>Reporter: James Cheng
>
> The docs say that there is a JMX metric 
> {noformat}
> kafka.server:type=ProducerRequestPurgatory,name=PurgatorySize
> {noformat}
> But that doesn't seem to work. Using jconsole to inspect our kafka broker, it 
> seems like the right metric is
> {noformat}
> kafka.server:type=DelayedOperationPurgatory,name=PurgatorySize,delayedOperation=Produce
> {noformat}
> And there are also variants of the above for Fetch, Heartbeat, and Rebalance.
> Is the fix to simply update the docs? From what I can see, the docs for this 
> don't seem auto-generated from code. If it is a simple doc fix, I would like 
> to take this JIRA.
> Btw, what is NumDelayedOperations, and how is it different from PurgatorySize?
> {noformat}
> kafka.server:type=DelayedOperationPurgatory,name=NumDelayedOperations,delayedOperation=Produce
> {noformat}
> And should I also update the docs for that?
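The purgatory MBean names above can be checked programmatically with a JMX pattern query. A self-contained Java sketch that registers a stand-in MBean under the name observed in jconsole and then queries for all delayedOperation variants; against a live broker, the same queryNames call would list the Produce, Fetch, Heartbeat, and Rebalance entries:

```java
import java.lang.management.ManagementFactory;
import java.util.Set;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class PurgatoryMBeanQuery {
    // Stand-in standard MBean (interface name must be <class>MBean).
    public interface PurgatorySizeMBean { int getValue(); }
    public static class PurgatorySize implements PurgatorySizeMBean {
        public int getValue() { return 0; }
    }

    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName name = new ObjectName(
            "kafka.server:type=DelayedOperationPurgatory,name=PurgatorySize,delayedOperation=Produce");
        if (!server.isRegistered(name)) {
            server.registerMBean(new PurgatorySize(), name);
        }

        // Pattern query: all purgatory-size gauges, any delayedOperation.
        Set<ObjectName> found = server.queryNames(new ObjectName(
            "kafka.server:type=DelayedOperationPurgatory,name=PurgatorySize,delayedOperation=*"),
            null);
        System.out.println(found);
    }
}
```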





[jira] [Resolved] (KAFKA-4823) Creating Kafka Producer on application running on Java older version

2017-08-22 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-4823.
--
Resolution: Won't Fix

Kafka dropped support for Java 1.6 starting with the 0.9 release. You can try 
the REST Proxy or other language libraries. Please reopen if you think otherwise

> Creating Kafka Producer on application running on Java older version
> 
>
> Key: KAFKA-4823
> URL: https://issues.apache.org/jira/browse/KAFKA-4823
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.1.1
> Environment: Application running on Java 1.6
>Reporter: live2code
>
> I have an application running on Java 1.6 which cannot be upgraded. This 
> application needs interfaces to post messages (as a producer) to a remote 
> Kafka server box and to receive messages as a consumer. The code runs fine 
> from my local environment, which is on Java 1.7, but the same producer and 
> consumer fail when executed within the application, with the error:
> Caused by: java.lang.UnsupportedClassVersionError: JVMCFRE003 bad major 
> version; class=org/apache/kafka/clients/producer/KafkaProducer, offset=6
> Is there some way I can still do this?
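For background, JVMCFRE003 / UnsupportedClassVersionError means the class file targets a newer JVM than the one loading it; Kafka clients from 0.9 onward are built for Java 7 (class-file major version 51), which a Java 6 runtime (major 50 and below) cannot load. A small Java sketch of the well-known version mapping:

```java
public class ClassFileVersions {
    // Well-known class-file major versions.
    static String jvmFor(int major) {
        switch (major) {
            case 50: return "Java 6";
            case 51: return "Java 7";
            case 52: return "Java 8";
            default: return "unknown";
        }
    }

    public static void main(String[] args) {
        // The running JVM reports the highest class-file version it supports.
        System.out.println("this JVM supports up to class version "
            + System.getProperty("java.class.version"));
        System.out.println("major 51 = " + jvmFor(51)); // prints "major 51 = Java 7"
    }
}
```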





[jira] [Commented] (KAFKA-2651) Remove deprecated config alteration from TopicCommand in 0.9.1.0

2017-05-18 Thread Manikumar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16015313#comment-16015313
 ] 

Manikumar commented on KAFKA-2651:
--

[~ijuma]  We deprecated altering configs via TopicCommand in 0.9.0; the current 
docs contain the lines below. With the new AdminClient, we can migrate 
TopicCommand to use the AdminClient APIs.

"Deprecations in 0.9.0.0
Altering topic configuration from the kafka-topics.sh script 
(kafka.admin.TopicCommand) has been deprecated. Going forward, please use the 
kafka-configs.sh script (kafka.admin.ConfigCommand) for this functionality."
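For reference, a hedged CLI sketch of the migration the docs describe (topic name and broker paths are hypothetical; flags follow the 0.9-era tool usage):

```sh
# Deprecated path (removed by this ticket): altering configs via kafka-topics.sh
bin/kafka-topics.sh --zookeeper localhost:2181 --alter --topic my-topic \
    --config segment.bytes=104857600

# Replacement: kafka-configs.sh (kafka.admin.ConfigCommand)
bin/kafka-configs.sh --zookeeper localhost:2181 --alter \
    --entity-type topics --entity-name my-topic \
    --add-config segment.bytes=104857600
```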

> Remove deprecated config alteration from TopicCommand in 0.9.1.0
> 
>
> Key: KAFKA-2651
> URL: https://issues.apache.org/jira/browse/KAFKA-2651
> Project: Kafka
>  Issue Type: Task
>Affects Versions: 0.9.0.0
>Reporter: Grant Henke
>Assignee: Manikumar
> Fix For: 0.11.0.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (KAFKA-4454) Authorizer should also include the Principal generated by the PrincipalBuilder.

2017-09-15 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-4454.
--
Resolution: Fixed

This is covered in KIP-189/KAFKA-5783

> Authorizer should also include the Principal generated by the 
> PrincipalBuilder.
> ---
>
> Key: KAFKA-4454
> URL: https://issues.apache.org/jira/browse/KAFKA-4454
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.1
>Reporter: Mayuresh Gharat
>Assignee: Mayuresh Gharat
>
> Currently Kafka allows users to plug in a custom PrincipalBuilder and a custom 
> Authorizer.
> The Authorizer.authorize() method takes in a Session object that wraps a 
> KafkaPrincipal and an InetAddress.
> The KafkaPrincipal currently has a PrincipalType and a Principal name, which is 
> the name of the Principal generated by the PrincipalBuilder. 
> The Principal generated by the plugged-in PrincipalBuilder might have other 
> fields that are required by the plugged-in Authorizer, but currently we 
> lose this information because we only extract the name of the Principal while 
> creating the KafkaPrincipal in SocketServer.  
> It would be great if KafkaPrincipal had an additional field 
> "channelPrincipal" used to store the Principal generated by the 
> plugged-in PrincipalBuilder.
> The plugged-in Authorizer could then use this "channelPrincipal" to do 
> authorization.
>  
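The proposal above can be sketched as follows. The class and field names mirror the suggestion in this report; this is a hypothetical illustration, not Kafka's actual KafkaPrincipal implementation:

```java
import java.security.Principal;

// Hypothetical sketch of the proposal: a KafkaPrincipal-like class that also
// carries the original Principal produced by the plugged-in PrincipalBuilder,
// so a custom Authorizer can inspect its extra fields.
public class ProposedKafkaPrincipal implements Principal {
    private final String principalType;
    private final String name;
    private final Principal channelPrincipal; // the proposed extra field

    public ProposedKafkaPrincipal(String principalType, String name,
                                  Principal channelPrincipal) {
        this.principalType = principalType;
        this.name = name;
        this.channelPrincipal = channelPrincipal;
    }

    @Override
    public String getName() { return name; }

    public String getPrincipalType() { return principalType; }

    // A plugged-in Authorizer could call this to recover the full Principal.
    public Principal getChannelPrincipal() { return channelPrincipal; }
}
```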





[jira] [Resolved] (KAFKA-5917) Kafka not starting

2017-09-18 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-5917.
--
Resolution: Won't Fix

These kinds of issues can be avoided once we completely move the Kafka tools to 
the Java Admin API.

> Kafka not starting
> --
>
> Key: KAFKA-5917
> URL: https://issues.apache.org/jira/browse/KAFKA-5917
> Project: Kafka
>  Issue Type: Bug
>  Components: admin
>Affects Versions: 0.10.0.0
>Reporter: Balu
>
> Getting this error in kafka,zookeeper,schema repository cluster.
>  FATAL [Kafka Server 3], Fatal error during KafkaServer startup. Prepare to 
> shutdown (kafka.server.KafkaServer)
> java.lang.IllegalArgumentException: requirement failed
> at scala.Predef$.require(Predef.scala:212)
> at 
> kafka.server.DynamicConfigManager$ConfigChangedNotificationHandler$.processNotification(DynamicConfigManager.scala:90)
> at 
> kafka.common.ZkNodeChangeNotificationListener$$anonfun$kafka$common$ZkNodeChangeNotificationListener$$processNotifications$2$$anonfun$apply$2.apply(ZkNodeChangeNotificationListener.scala:95)
> at 
> kafka.common.ZkNodeChangeNotificationListener$$anonfun$kafka$common$ZkNodeChangeNotificationListener$$processNotifications$2$$anonfun$apply$2.apply(ZkNodeChangeNotificationListener.scala:95)
> at scala.Option.map(Option.scala:146)
> at 
> kafka.common.ZkNodeChangeNotificationListener$$anonfun$kafka$common$ZkNodeChangeNotificationListener$$processNotifications$2.apply(ZkNodeChangeNotificationListener.scala:95)
> at 
> kafka.common.ZkNodeChangeNotificationListener$$anonfun$kafka$common$ZkNodeChangeNotificationListener$$processNotifications$2.apply(ZkNodeChangeNotificationListener.scala:90)
> at 
> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
> at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
> at 
> kafka.common.ZkNodeChangeNotificationListener.kafka$common$ZkNodeChangeNotificationListener$$processNotifications(ZkNodeChangeNotificationListener.scala:90)
> at 
> kafka.common.ZkNodeChangeNotificationListener.processAllNotifications(ZkNodeChangeNotificationListener.scala:79)
> at 
> kafka.common.ZkNodeChangeNotificationListener.init(ZkNodeChangeNotificationListener.scala:67)
> at 
> kafka.server.DynamicConfigManager.startup(DynamicConfigManager.scala:122)
> at kafka.server.KafkaServer.startup(KafkaServer.scala:233)
> at 
> kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:37)
> at kafka.Kafka$.main(Kafka.scala:67)
> at kafka.Kafka.main(Kafka.scala)
> Please help





[jira] [Resolved] (KAFKA-5015) SASL/SCRAM authentication failures are hidden

2017-09-22 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-5015.
--
Resolution: Duplicate

Resolving as duplicate of KAFKA-4764

> SASL/SCRAM authentication failures are hidden
> -
>
> Key: KAFKA-5015
> URL: https://issues.apache.org/jira/browse/KAFKA-5015
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 0.10.2.0
>Reporter: Johan Ström
>
> During experimentation with multiple brokers and SCRAM authentication, the 
> brokers didn't seem to connect properly.
> Apparently the receiving server does not log connection failures (and their 
> cause) unless you enable DEBUG logging on 
> org.apache.kafka.common.network.Selector.
> Expected: rejected connections are logged (without a stack trace) 
> without having to enable DEBUG. 
> (The root cause of my problem was that I hadn't yet added the user to the 
> ZooKeeper-backed SCRAM configuration.)
> The controller flooded controller.log with WARNs:
> {code}
> [2017-04-05 15:33:42,850] WARN [Controller-1-to-broker-1-send-thread], 
> Controller 1's connection to broker kafka02:9093 (id: 1 rack: null) was 
> unsuccessful (kafka.controller.RequestSendThread)
> java.io.IOException: Connection to kafka02:9093 (id: 1 rack: null) failed
> {code}
> The peer does not log anything in any log, until debugging was enabled:
> {code}
> [2017-04-05 15:28:58,373] DEBUG Accepted connection from /10.10.0.5:43670 on 
> /10.10.0.6:9093 and assigned it to processor 4, sendBufferSize 
> [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: 
> [102400|102400] (kafka.network.Acceptor)
> [2017-04-05 15:28:58,374] DEBUG Processor 4 listening to new connection from 
> /10.10.0.5:43670 (kafka.network.Processor)
> [2017-04-05 15:28:58,376] DEBUG Set SASL server state to HANDSHAKE_REQUEST 
> (org.apache.kafka.common.security.authenticator.SaslServerAuthenticator)
> [2017-04-05 15:28:58,376] DEBUG Handle Kafka request SASL_HANDSHAKE 
> (org.apache.kafka.common.security.authenticator.SaslServerAuthenticator)
> [2017-04-05 15:28:58,378] DEBUG Using SASL mechanism 'SCRAM-SHA-512' provided 
> by client 
> (org.apache.kafka.common.security.authenticator.SaslServerAuthenticator)
> [2017-04-05 15:28:58,381] DEBUG Setting SASL/SCRAM_SHA_512 server state to 
> RECEIVE_CLIENT_FIRST_MESSAGE 
> (org.apache.kafka.common.security.scram.ScramSaslServer)
> [2017-04-05 15:28:58,381] DEBUG Set SASL server state to AUTHENTICATE 
> (org.apache.kafka.common.security.authenticator.SaslServerAuthenticator)
> [2017-04-05 15:28:58,383] DEBUG Setting SASL/SCRAM_SHA_512 server state to 
> FAILED (org.apache.kafka.common.security.scram.ScramSaslServer)
> [2017-04-05 15:28:58,383] DEBUG Set SASL server state to FAILED 
> (org.apache.kafka.common.security.authenticator.SaslServerAuthenticator)
> [2017-04-05 15:28:58,385] DEBUG Connection with /10.10.0.5 disconnected 
> (org.apache.kafka.common.network.Selector)
> java.io.IOException: javax.security.sasl.SaslException: Authentication 
> failed: Credentials could not be obtained [Caused by 
> javax.security.sasl.SaslException: Authentication failed: Invalid user 
> credentials]
>   at 
> org.apache.kafka.common.security.authenticator.SaslServerAuthenticator.authenticate(SaslServerAuthenticator.java:250)
>   at 
> org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:71)
>   at 
> org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:350)
>   at org.apache.kafka.common.network.Selector.poll(Selector.java:303)
>   at kafka.network.Processor.poll(SocketServer.scala:494)
>   at kafka.network.Processor.run(SocketServer.scala:432)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: javax.security.sasl.SaslException: Authentication failed: 
> Credentials could not be obtained [Caused by 
> javax.security.sasl.SaslException: Authentication failed: Invalid user 
> credentials]
>   at 
> org.apache.kafka.common.security.scram.ScramSaslServer.evaluateResponse(ScramSaslServer.java:104)
>   at 
> org.apache.kafka.common.security.authenticator.SaslServerAuthenticator.authenticate(SaslServerAuthenticator.java:235)
>   ... 6 more
> Caused by: javax.security.sasl.SaslException: Authentication failed: Invalid 
> user credentials
>   at 
> org.apache.kafka.common.security.scram.ScramSaslServer.evaluateResponse(ScramSaslServer.java:94)
>   ... 7 more
> {code}





[jira] [Resolved] (KAFKA-5910) Kafka 0.11.0.1 Kafka consumer/producers retries in infinite loop when wrong SASL creds are passed

2017-09-22 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-5910.
--
Resolution: Duplicate

Resolving as duplicate of KAFKA-4764

> Kafka 0.11.0.1 Kafka consumer/producers retries in infinite loop when wrong 
> SASL creds are passed
> -
>
> Key: KAFKA-5910
> URL: https://issues.apache.org/jira/browse/KAFKA-5910
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.11.0.0
>Reporter: Ramkumar
>
> Similar to https://issues.apache.org/jira/browse/KAFKA-4764 , whose status 
> shows a patch available, but the client won't disconnect after getting the 
> warning.
> Issue 1:
> Publisher flow:
> The Kafka publisher goes into an infinite loop if the AAF credentials are 
> wrong when authenticating with the Kafka broker.
> Detail:
> If the correct user name and password are used at the Kafka publisher client 
> side to connect to the Kafka broker, then it authenticates and authorizes fine.
> If an incorrect username or password is used at the Kafka publisher client 
> side, then the broker log shows a continuous (infinite loop) stream of entries 
> showing the client trying to reconnect to the broker, as it doesn't get an 
> authentication failure exception from the broker. 
> JIRA defect in Apache:
> https://issues.apache.org/jira/browse/KAFKA-4764
> Can you please let me know if this issue is resolved in Kafka 0.11.0.1 
> or is still open?





[jira] [Resolved] (KAFKA-4504) Details of retention.bytes property at Topic level are not clear on how they impact partition size

2017-10-17 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-4504.
--
   Resolution: Fixed
 Assignee: Manikumar
Fix Version/s: 1.0.0

> Details of retention.bytes property at Topic level are not clear on how they 
> impact partition size
> --
>
> Key: KAFKA-4504
> URL: https://issues.apache.org/jira/browse/KAFKA-4504
> Project: Kafka
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 0.10.0.1
>Reporter: Justin Manchester
>Assignee: Manikumar
> Fix For: 1.0.0
>
>
> Problem:
> Details of retention.bytes property at Topic level are not clear on how they 
> impact partition size
> Business Impact:
> Users are setting retention.bytes and not seeing the desired amount of 
> stored data.
> Current Text:
> This configuration controls the maximum size a log can grow to before we will 
> discard old log segments to free up space if we are using the "delete" 
> retention policy. By default there is no size limit only a time limit.
> Proposed change:
> This configuration controls the maximum size a log can grow to before we will 
> discard old log segments to free up space if we are using the "delete" 
> retention policy. By default there is no size limit only a time limit.  
> Please note, this is calculated as retention.bytes * number of partitions on 
> the given topic for the total amount of disk space to be used.  
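The proposed arithmetic can be sketched as follows; the helper class is hypothetical, purely to illustrate how per-partition retention scales to total disk usage:

```java
// Hypothetical helper illustrating the proposed doc clarification:
// with the "delete" retention policy, total disk used by a topic is
// roughly retention.bytes multiplied by the number of partitions.
public class RetentionCalc {
    public static long totalRetentionBytes(long retentionBytesPerPartition,
                                           int partitionCount) {
        return retentionBytesPerPartition * partitionCount;
    }

    public static void main(String[] args) {
        // e.g. retention.bytes = 1 GiB on a 12-partition topic -> ~12 GiB total
        System.out.println(totalRetentionBytes(1L << 30, 12));
    }
}
```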





[jira] [Resolved] (KAFKA-4638) Outdated and misleading link in section "End-to-end Batch Compression"

2017-09-08 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-4638.
--
Resolution: Duplicate

Resolving as a duplicate of KAFKA-5686; a PR has been raised for KAFKA-5686.

> Outdated and misleading link in section "End-to-end Batch Compression"
> --
>
> Key: KAFKA-4638
> URL: https://issues.apache.org/jira/browse/KAFKA-4638
> Project: Kafka
>  Issue Type: Bug
>  Components: compression
>Affects Versions: 0.10.1.1
> Environment: Documentation
>Reporter: Vincent Tieleman
>Priority: Minor
>
> The section "End-to-end Batch Compression" mentions the following:
> "Kafka supports GZIP, Snappy and LZ4 compression protocols. More details on 
> compression can be found here."
> When looking up the "here" link, 
> https://cwiki.apache.org/confluence/display/KAFKA/Compression, it describes 
> not only the mechanism but also gives the configuration properties to use. 
> The properties provided are "compression.codec" and "compression.topics", but 
> these are no longer used. Instead, one should use "compression.type", where 
> one can directly specify which compression to use (i.e. currently gzip, 
> snappy or lz4). The link also fails to mention lz4 compression, which is 
> now supported.
> Unaware that this information was outdated, I spent quite some time before 
> finding out that I was using the wrong properties. I imagine this must happen 
> to other people.
> I suggest adding a small remark to the link, so that people know it is 
> outdated. Furthermore, I would add that the latest configuration properties 
> can be found in the "Producer config" section.
> The alternative would be to simply remove the link altogether, as there is 
> too much outdated information in it.
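A minimal sketch of the corrected configuration using the `compression.type` property the reporter points to. The broker address is illustrative, and serializer settings are omitted; the sketch only builds the Properties object:

```java
import java.util.Properties;

// Sketch: producer properties using the current "compression.type" key
// instead of the outdated "compression.codec" / "compression.topics".
public class CompressionConfig {
    public static Properties producerProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // illustrative address
        props.put("compression.type", "gzip");            // also: "snappy", "lz4"
        return props;
    }

    public static void main(String[] args) {
        System.out.println(producerProps().getProperty("compression.type"));
    }
}
```

In a real client these properties would be passed to the KafkaProducer constructor along with the required serializer settings.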





[jira] [Resolved] (KAFKA-5592) Connection with plain client to SSL-secured broker causes OOM

2017-09-08 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-5592.
--
Resolution: Duplicate

Marking this as duplicate of KAFKA-4493.

> Connection with plain client to SSL-secured broker causes OOM
> -
>
> Key: KAFKA-5592
> URL: https://issues.apache.org/jira/browse/KAFKA-5592
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.11.0.0
> Environment: Linux x86_64 x86_64 x86_64 GNU/Linux
>Reporter: Marcin Łuczyński
> Attachments: heapdump.20170713.100129.14207.0002.phd, Heap.PNG, 
> javacore.20170713.100129.14207.0003.txt, Snap.20170713.100129.14207.0004.trc, 
> Stack.PNG
>
>
> While testing a connection from a client app that does not have a configured 
> truststore to a Kafka broker secured by SSL, my JVM crashes with an 
> OutOfMemoryError. I saw it mixed with a StackOverflowError. I attach dump files.
> The stack trace to start with is here:
> {quote}at java/nio/HeapByteBuffer. (HeapByteBuffer.java:57) 
> at java/nio/ByteBuffer.allocate(ByteBuffer.java:331) 
> at 
> org/apache/kafka/common/network/NetworkReceive.readFromReadableChannel(NetworkReceive.java:93)
>  
> at 
> org/apache/kafka/common/network/NetworkReceive.readFrom(NetworkReceive.java:71)
>  
> at 
> org/apache/kafka/common/network/KafkaChannel.receive(KafkaChannel.java:169) 
> at org/apache/kafka/common/network/KafkaChannel.read(KafkaChannel.java:150) 
> at 
> org/apache/kafka/common/network/Selector.pollSelectionKeys(Selector.java:355) 
> at org/apache/kafka/common/network/Selector.poll(Selector.java:303) 
> at org/apache/kafka/clients/NetworkClient.poll(NetworkClient.java:349) 
> at 
> org/apache/kafka/clients/consumer/internals/ConsumerNetworkClient.poll(ConsumerNetworkClient.java:226)
>  
> at 
> org/apache/kafka/clients/consumer/internals/ConsumerNetworkClient.poll(ConsumerNetworkClient.java:188)
>  
> at 
> org/apache/kafka/clients/consumer/internals/AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:207)
>  
> at 
> org/apache/kafka/clients/consumer/internals/AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:193)
>  
> at 
> org/apache/kafka/clients/consumer/internals/ConsumerCoordinator.poll(ConsumerCoordinator.java:279)
>  
> at 
> org/apache/kafka/clients/consumer/KafkaConsumer.pollOnce(KafkaConsumer.java:1029)
>  
> at 
> org/apache/kafka/clients/consumer/KafkaConsumer.poll(KafkaConsumer.java:995) 
> at 
> com/ibm/is/cc/kafka/runtime/KafkaConsumerProcessor.process(KafkaConsumerProcessor.java:237)
>  
> at 
> com/ibm/is/cc/kafka/runtime/KafkaProcessor.process(KafkaProcessor.java:173) 
> at 
> com/ibm/is/cc/javastage/connector/CC_JavaAdapter.run(CC_JavaAdapter.java:443){quote}





[jira] [Resolved] (KAFKA-1037) switching to gzip compression no error message for missing snappy jar on classpath

2017-09-08 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-1037.
--
Resolution: Fixed

This has been fixed in newer versions.

> switching to gzip compression no error message for missing snappy jar on 
> classpath
> --
>
> Key: KAFKA-1037
> URL: https://issues.apache.org/jira/browse/KAFKA-1037
> Project: Kafka
>  Issue Type: Bug
>Reporter: Joe Stein
>  Labels: noob
>
> The error seems to be swallowed when log4j.properties is not set, but shows up 
> when it is set and the log level is DEBUG.





[jira] [Resolved] (KAFKA-251) The ConsumerStats MBean's PartOwnerStats attribute is a string

2017-09-08 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-251.
-
Resolution: Won't Fix

Closing inactive issue as per above comments.

> The ConsumerStats MBean's PartOwnerStats  attribute is a string
> ---
>
> Key: KAFKA-251
> URL: https://issues.apache.org/jira/browse/KAFKA-251
> Project: Kafka
>  Issue Type: Bug
>Reporter: Pierre-Yves Ritschard
> Attachments: 0001-Incorporate-Jun-Rao-s-comments-on-KAFKA-251.patch, 
> 0001-Provide-a-patch-for-KAFKA-251.patch
>
>
> The fact that the PartOwnerStats is a string prevents monitoring systems from 
> graphing consumer lag. There should be one MBean per [ topic, partition, 
> groupid ] group.





[jira] [Resolved] (KAFKA-1608) Windows: Error: Could not find or load main class org.apache.zookeeper.server.quorum.QuorumPeerMain

2017-09-08 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-1608.
--
Resolution: Fixed

 This was fixed in newer versions. Please reopen if you think the issue still exists.


> Windows: Error: Could not find or load main class 
> org.apache.zookeeper.server.quorum.QuorumPeerMain
> ---
>
> Key: KAFKA-1608
> URL: https://issues.apache.org/jira/browse/KAFKA-1608
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.1.1
> Environment: Windows
>Reporter: Rakesh Komulwad
>Priority: Minor
>  Labels: windows
>
> When trying to start zookeeper getting the following error in Windows
> Error: Could not find or load main class 
> org.apache.zookeeper.server.quorum.QuorumPeerMain
> Fix for this is to edit windows\kafka-run-class.bat
> Change
> set BASE_DIR=%CD%\..
> to
> set BASE_DIR=%CD%\..\..
> Change
> for %%i in (%BASE_DIR%\core\lib\*.jar)
> to
> for %%i in (%BASE_DIR%\libs\*.jar)





[jira] [Resolved] (KAFKA-2319) After controlled shutdown: IllegalStateException: Kafka scheduler has not been started

2017-09-08 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-2319.
--
Resolution: Fixed

This was fixed in newer versions. Please reopen if you think the issue still exists.


> After controlled shutdown: IllegalStateException: Kafka scheduler has not 
> been started
> --
>
> Key: KAFKA-2319
> URL: https://issues.apache.org/jira/browse/KAFKA-2319
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Rosenberg
>
> Running 0.8.2.1, just saw this today at the end of a controlled shutdown.  It 
> doesn't happen every time, but I've seen it several times:
> {code}
> 2015-07-07 18:54:28,424  INFO [Thread-4] server.KafkaServer - [Kafka Server 
> 99], Controlled shutdown succeeded
> 2015-07-07 18:54:28,425  INFO [Thread-4] network.SocketServer - [Socket 
> Server on Broker 99], Shutting down
> 2015-07-07 18:54:28,435  INFO [Thread-4] network.SocketServer - [Socket 
> Server on Broker 99], Shutdown completed
> 2015-07-07 18:54:28,435  INFO [Thread-4] server.KafkaRequestHandlerPool - 
> [Kafka Request Handler on Broker 99], shutting down
> 2015-07-07 18:54:28,444  INFO [Thread-4] server.KafkaRequestHandlerPool - 
> [Kafka Request Handler on Broker 99], shut down completely
> 2015-07-07 18:54:28,649  INFO [Thread-4] server.ReplicaManager - [Replica 
> Manager on Broker 99]: Shut down
> 2015-07-07 18:54:28,649  INFO [Thread-4] server.ReplicaFetcherManager - 
> [ReplicaFetcherManager on broker 99] shutting down
> 2015-07-07 18:54:28,650  INFO [Thread-4] server.ReplicaFetcherThread - 
> [ReplicaFetcherThread-0-95], Shutting down
> 2015-07-07 18:54:28,750  INFO [Thread-4] server.ReplicaFetcherThread - 
> [ReplicaFetcherThread-0-95], Shutdown completed
> 2015-07-07 18:54:28,750  INFO [ReplicaFetcherThread-0-95] 
> server.ReplicaFetcherThread - [ReplicaFetcherThread-0-95], Stopped
> 2015-07-07 18:54:28,750  INFO [Thread-4] server.ReplicaFetcherThread - 
> [ReplicaFetcherThread-0-98], Shutting down
> 2015-07-07 18:54:28,791  INFO [Thread-4] server.ReplicaFetcherThread - 
> [ReplicaFetcherThread-0-98], Shutdown completed
> 2015-07-07 18:54:28,791  INFO [ReplicaFetcherThread-0-98] 
> server.ReplicaFetcherThread - [ReplicaFetcherThread-0-98], Stopped
> 2015-07-07 18:54:28,791  INFO [Thread-4] server.ReplicaFetcherManager - 
> [ReplicaFetcherManager on broker 99] shutdown completed
> 2015-07-07 18:54:28,819  INFO [Thread-4] server.ReplicaManager - [Replica 
> Manager on Broker 99]: Shut down completely
> 2015-07-07 18:54:28,826  INFO [Thread-4] log.LogManager - Shutting down.
> 2015-07-07 18:54:30,459  INFO [Thread-4] log.LogManager - Shutdown complete.
> 2015-07-07 18:54:30,463  WARN [Thread-4] utils.Utils$ - Kafka scheduler has 
> not been started
> java.lang.IllegalStateException: Kafka scheduler has not been started
> at kafka.utils.KafkaScheduler.ensureStarted(KafkaScheduler.scala:114)
> at kafka.utils.KafkaScheduler.shutdown(KafkaScheduler.scala:86)
> at 
> kafka.controller.KafkaController.onControllerResignation(KafkaController.scala:350)
> at 
> kafka.controller.KafkaController.shutdown(KafkaController.scala:664)
> at 
> kafka.server.KafkaServer$$anonfun$shutdown$8.apply$mcV$sp(KafkaServer.scala:285)
> at kafka.utils.Utils$.swallow(Utils.scala:172)
> at kafka.utils.Logging$class.swallowWarn(Logging.scala:92)
> at kafka.utils.Utils$.swallowWarn(Utils.scala:45)
> at kafka.utils.Logging$class.swallow(Logging.scala:94)
> at kafka.utils.Utils$.swallow(Utils.scala:45)
> at kafka.server.KafkaServer.shutdown(KafkaServer.scala:285)
>  ...
> {code}





[jira] [Resolved] (KAFKA-1817) AdminUtils.createTopic vs kafka-topics.sh --create with partitions

2017-09-08 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-1817.
--
Resolution: Fixed

Fixed in  KAFKA-1737

> AdminUtils.createTopic vs kafka-topics.sh --create with partitions
> --
>
> Key: KAFKA-1817
> URL: https://issues.apache.org/jira/browse/KAFKA-1817
> Project: Kafka
>  Issue Type: Sub-task
>Affects Versions: 0.8.2.0
> Environment: debian linux current version  up to date
>Reporter: Jason Kania
>
> When topics are created using AdminUtils.createTopic in code, no partitions 
> folder is created. The zookeeper shell shows this:
> ls /brokers/topics/foshizzle
> []
> However, when kafka-topics.sh --create is run, the partitions folder is 
> created:
> ls /brokers/topics/foshizzle
> [partitions]
> The unfortunately unhelpful error message "KeeperErrorCode = NoNode for 
> /brokers/topics/periodicReading/partitions" makes it unclear what to do. When 
> the topics are listed via kafka-topics.sh, they appear to have been created 
> fine. It would be good if the exception were wrapped by Kafka to suggest 
> looking in the zookeeper shell, so a person didn't have to dig around to 
> understand the meaning of this path.





[jira] [Resolved] (KAFKA-2392) Kafka Server does not accept 0 as a port

2017-09-08 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-2392.
--
Resolution: Fixed

Kafka supports port 0 for unit tests. As mentioned in the previous comment, 
Curator's TestingServer can be used to run an in-memory ZooKeeper.

> Kafka Server does not accept 0 as a port
> 
>
> Key: KAFKA-2392
> URL: https://issues.apache.org/jira/browse/KAFKA-2392
> Project: Kafka
>  Issue Type: Bug
>  Components: config
>Affects Versions: 0.8.2.1
>Reporter: Buğra Gedik
>Priority: Minor
>
> I want to specify 0 as the port number for Zookeeper as well as the Kafka 
> server. For instance, the server.properties configuration file has a 'port' 
> property, but it does not accept 0 as a value. Similarly, zookeeper.properties 
> has a 'clientPort' property, but it does not accept 0 as a value.
> I want 0 to specify that the port will be selected automatically (OS 
> assigned). In my use case, I want to run Zookeeper with an automatically 
> picked port, and use that port to create a Kafka Server configuration file, 
> that specifies the Kafka Server port as 0 as well. I parse the output from 
> the servers to figure out the actual ports used. All this is needed for a 
> testing environment.
> Not supporting automatically selected ports makes it difficult to run Kafka 
> server as part of our tests.
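The OS-assigned-port behavior being requested is standard socket semantics; a minimal stdlib sketch, independent of Kafka, of binding to port 0 and reading back the port the OS actually picked:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.net.ServerSocket;

public class EphemeralPort {
    // Bind to port 0 so the OS picks a free ephemeral port,
    // then read back the actual port before releasing the socket.
    public static int pickFreePort() {
        try (ServerSocket socket = new ServerSocket(0)) {
            return socket.getLocalPort(); // the OS-assigned port
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(pickFreePort());
    }
}
```

This is the same mechanism a test harness would use before writing the chosen port into a generated configuration file.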





[jira] [Resolved] (KAFKA-4634) Issue of one kafka brokers not listed in zookeeper

2017-08-29 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-4634.
--
Resolution: Fixed

A similar issue was fixed in KAFKA-1387 and newer versions. Please reopen if you 
think the issue still exists.


> Issue of one kafka brokers not listed in zookeeper
> --
>
> Key: KAFKA-4634
> URL: https://issues.apache.org/jira/browse/KAFKA-4634
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.2.1
> Environment: Zookeeper version: 3.4.6-1569965, built on 02/20/2014 
> 09:09 GMT
> kafka_2.10-0.8.2.1
>Reporter: Maharajan Shunmuga Sundaram
>
> Hi,
> We had an incident where one of the 10 brokers was not listed in the brokers 
> list in zookeeper.
> This is verified by running following command
> >> echo dump | nc cz2 2181
> SessionTracker dump:
> Session Sets (4):
> 0 expire at Fri Jan 13 22:32:14 EST 2017:
> 0 expire at Fri Jan 13 22:32:16 EST 2017:
> 7 expire at Fri Jan 13 22:32:18 EST 2017:
> 0x259968e41e3
> 0x35996670d5d0001
> 0x35996670d5d
> 0x159966708470004
> 0x159966e4776
> 0x159966708470003
> 0x2599672df26
> 3 expire at Fri Jan 13 22:32:20 EST 2017:
> 0x159968e41dd
> 0x259966708550001
> 0x25996670855
> ephemeral nodes dump:
> Sessions with Ephemerals (9):
> 0x25996670855:
> /brokers/ids/112
> 0x259968e41e3:
> /brokers/ids/213
> 0x159968e41dd:
> /brokers/ids/19
> 0x159966708470003:
> /brokers/ids/110
> 0x35996670d5d:
> /brokers/ids/113
> /controller
> 0x259966708550001:
> /brokers/ids/111
> 0x159966708470004:
> /brokers/ids/212
> 0x2599672df26:
> /brokers/ids/29
> 0x35996670d5d0001:
> /brokers/ids/210
> --
> There are 10 sessions, but only 9 of them are listed with brokers.
> The broker with id 211 is not listed; session 0x159966e4776 is not shown 
> with broker id 211.
> In the broker-side log, I do see it is connected:
> >> zgrep "0x159966e4776" *log*
>  
> zk.log:[2017-01-13 01:05:28,513] INFO Session establishment complete on 
> server cz1/10.254.2.19:2181, sessionid = 0x159966e4776, negotiated 
> timeout = 6000 (org.apache.zookeeper.ClientCnxn)
> zk.log:[2017-01-13 01:35:38,163] INFO Unable to read additional data from 
> server sessionid 0x159966e4776, likely server has closed socket, closing 
> socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
> zk.log:[2017-01-13 01:35:39,101] WARN Session 0x159966e4776 for server 
> null, unexpected error, closing socket connection and attempting reconnect 
> (org.apache.zookeeper.ClientCnxn)
> zk.log:[2017-01-13 01:35:40,121] INFO Unable to read additional data from 
> server sessionid 0x159966e4776, likely server has closed socket, closing 
> socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
> zk.log:[2017-01-13 01:35:41,770] WARN Session 0x159966e4776 for server 
> null, unexpected error, closing socket connection and attempting reconnect 
> (org.apache.zookeeper.ClientCnxn)
> zk.log:[2017-01-13 01:35:42,439] WARN Session 0x159966e4776 for server 
> null, unexpected error, closing socket connection and attempting reconnect 
> (org.apache.zookeeper.ClientCnxn)
> zk.log:[2017-01-13 01:35:43,235] INFO Unable to read additional data from 
> server sessionid 0x159966e4776, likely server has closed socket, closing 
> socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
> zk.log:[2017-01-13 01:35:44,950] WARN Session 0x159966e4776 for server 
> null, unexpected error, closing socket connection and attempting reconnect 
> (org.apache.zookeeper.ClientCnxn)
> zk.log:[2017-01-13 01:35:45,837] INFO Unable to read additional data from 
> server sessionid 0x159966e4776, likely server has closed socket, closing 
> socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
> .
> .
> .
> .
> zk.log:[2017-01-13 01:40:14,818] INFO Unable to read additional data from 
> server sessionid 0x159966e4776, likely server has closed socket, closing 
> socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
> zk.log:[2017-01-13 01:40:15,916] WARN Session 0x159966e4776 for server 
> null, unexpected error, closing socket connection and attempting reconnect 
> (org.apache.zookeeper.ClientCnxn)
> zk.log:[2017-01-13 01:40:19,692] INFO Client session timed out, have not 
> heard from server in 3676ms for sessionid 0x159966e4776, closing socket 
> connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
> zk.log:[2017-01-13 01:40:20,632] INFO Unable to read additional data from 
> server sessionid 0x159966e4776, likely server has closed socket, closing 
> 

[jira] [Resolved] (KAFKA-2173) Kafka died after throw more error

2017-08-29 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-2173.
--
Resolution: Cannot Reproduce

 This might have been fixed in the latest versions. Please reopen if you think the 
issue still exists.


> Kafka died after throw more error
> -
>
> Key: KAFKA-2173
> URL: https://issues.apache.org/jira/browse/KAFKA-2173
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.2.1
> Environment: VPS Server CentOs 6.6 4G Ram
>Reporter: Truyet Nguyen
>
> Kafka died after server.log threw many errors: 
> [2015-05-05 16:08:34,616] ERROR Closing socket for /127.0.0.1 because of 
> error (kafka.network.Processor)
> java.io.IOException: Broken pipe
>   at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
>   at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
>   at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
>   at sun.nio.ch.IOUtil.write(IOUtil.java:65)
>   at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:470)
>   at kafka.api.TopicDataSend.writeTo(FetchResponse.scala:123)
>   at kafka.network.MultiSend.writeTo(Transmission.scala:101)
>   at kafka.api.FetchResponseSend.writeTo(FetchResponse.scala:231)
>   at kafka.network.Processor.write(SocketServer.scala:472)
>   at kafka.network.Processor.run(SocketServer.scala:342)
>   at java.lang.Thread.run(Thread.java:745)
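The "Broken pipe" in the stack trace above means the client closed its socket while the broker's Processor thread was still writing the fetch response. A minimal Python sketch of that failure mode (an illustration only, not Kafka's code):

```python
import socket

# A connected socket pair stands in for broker <-> client.
a, b = socket.socketpair()
b.close()  # the peer (client) drops its connection

raised = False
try:
    # the "broker" side keeps writing to the dead peer
    a.sendall(b"x" * 65536)
except (BrokenPipeError, ConnectionResetError):
    # EPIPE (or ECONNRESET on some platforms): the write fails because
    # the other end is gone, exactly what the broker logs above show
    raised = True
finally:
    a.close()

assert raised
```

A broker cannot prevent clients from disconnecting mid-response, so these log entries are usually noise rather than a broker fault.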



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-1636) High CPU in very active environment

2017-08-29 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-1636.
--
Resolution: Won't Fix

ConsumerIterator blocks waiting for data from the underlying stream, so these 
parked threads are expected. Please reopen if you think the issue still exists


> High CPU in very active environment
> ---
>
> Key: KAFKA-1636
> URL: https://issues.apache.org/jira/browse/KAFKA-1636
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.8.0
> Environment: Redhat 64 bit
>  2.6.32-431.23.3.el6.x86_64 #1 SMP Wed Jul 16 06:12:23 EDT 2014 x86_64 x86_64 
> x86_64 GNU/Linux
>Reporter: Laurie Turner
>Assignee: Neha Narkhede
>
> Found the same issue on StackOverFlow below:
> http://stackoverflow.com/questions/22983435/kafka-consumer-threads-are-in-waiting-state-and-lag-is-getting-increased
> This is a very busy environment, and the majority of the CPU seems to be busy 
> in the await method. 
> 3XMTHREADINFO3   Java callstack:
> 4XESTACKTRACEat sun/misc/Unsafe.park(Native Method)
> 4XESTACKTRACEat 
> java/util/concurrent/locks/LockSupport.parkNanos(LockSupport.java:237(Compiled
>  Code))
> 4XESTACKTRACEat 
> java/util/concurrent/locks/AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2093(Compiled
>  Code))
> 4XESTACKTRACEat 
> java/util/concurrent/LinkedBlockingQueue.poll(LinkedBlockingQueue.java:478(Compiled
>  Code))
> 4XESTACKTRACEat 
> kafka/consumer/ConsumerIterator.makeNext(ConsumerIterator.scala:65(Compiled 
> Code))
> 4XESTACKTRACEat 
> kafka/consumer/ConsumerIterator.makeNext(ConsumerIterator.scala:33(Compiled 
> Code))
> 4XESTACKTRACEat 
> kafka/utils/IteratorTemplate.maybeComputeNext(IteratorTemplate.scala:61(Compiled
>  Code))
> 4XESTACKTRACEat 
> kafka/utils/IteratorTemplate.hasNext(IteratorTemplate.scala:53(Compiled Code))



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-888) problems when shutting down the java consumer .

2017-08-30 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-888.
-
Resolution: Cannot Reproduce

 Please reopen if you think the issue still exists


> problems when shutting down the java consumer .
> ---
>
> Key: KAFKA-888
> URL: https://issues.apache.org/jira/browse/KAFKA-888
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.8.0
> Environment: Linux kacper-pc 3.2.0-40-generic #64-Ubuntu SMP 2013 
> x86_64 x86_64 x86_64 GNU/Linux, scala 2.9.2 
>Reporter: kacper chwialkowski
>Assignee: Neha Narkhede
>Priority: Minor
>  Labels: bug, consumer, exception
>
> I got the following error when shutting down the consumer :
> ConsumerFetcherThread-test-consumer-group_kacper-pc-1367268338957-12cb5a0b-0-0]
>  INFO  kafka.consumer.SimpleConsumer - Reconnect due to socket error: 
> java.nio.channels.ClosedByInterruptException: null
>   at 
> java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202)
>  ~[na:1.7.0_21]
>   at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:386) 
> ~[na:1.7.0_21]
>   at 
> sun.nio.ch.SocketAdaptor$SocketInputStream.read(SocketAdaptor.java:220) 
> ~[na:1.7.0_21]
>   at sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:103) 
> ~[na:1.7.0_21]
>   at 
> java.nio.channels.Channels$ReadableByteChannelImpl.read(Channels.java:385) 
> ~[na:1.7.0_21]
>   at kafka.utils.Utils$.read(Utils.scala:394) 
> ~[kafka_2.9.2-0.8.0-SNAPSHOT.jar:0.8.0-SNAPSHOT]
>   at 
> kafka.network.BoundedByteBufferReceive.readFrom(BoundedByteBufferReceive.scala:54)
>  ~[kafka_2.9.2-0.8.0-SNAPSHOT.jar:0.8.0-SNAPSHOT]
>   at kafka.network.Receive$class.readCompletely(Transmission.scala:56) 
> ~[kafka_2.9.2-0.8.0-SNAPSHOT.jar:0.8.0-SNAPSHOT]
>   at 
> kafka.network.BoundedByteBufferReceive.readCompletely(BoundedByteBufferReceive.scala:29)
>  ~[kafka_2.9.2-0.8.0-SNAPSHOT.jar:0.8.0-SNAPSHOT]
>   at kafka.network.BlockingChannel.receive(BlockingChannel.scala:100) 
> ~[kafka_2.9.2-0.8.0-SNAPSHOT.jar:0.8.0-SNAPSHOT]
>   at kafka.consumer.SimpleConsumer.liftedTree1$1(SimpleConsumer.scala:75) 
> [kafka_2.9.2-0.8.0-SNAPSHOT.jar:0.8.0-SNAPSHOT]
>   at 
> kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:73)
>  [kafka_2.9.2-0.8.0-SNAPSHOT.jar:0.8.0-SNAPSHOT]
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(SimpleConsumer.scala:112)
>  [kafka_2.9.2-0.8.0-SNAPSHOT.jar:0.8.0-SNAPSHOT]
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply(SimpleConsumer.scala:112)
>  [kafka_2.9.2-0.8.0-SNAPSHOT.jar:0.8.0-SNAPSHOT]
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply(SimpleConsumer.scala:112)
>  [kafka_2.9.2-0.8.0-SNAPSHOT.jar:0.8.0-SNAPSHOT]
>   at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33) 
> [kafka_2.9.2-0.8.0-SNAPSHOT.jar:0.8.0-SNAPSHOT]
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply$mcV$sp(SimpleConsumer.scala:111)
>  [kafka_2.9.2-0.8.0-SNAPSHOT.jar:0.8.0-SNAPSHOT]
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply(SimpleConsumer.scala:111)
>  [kafka_2.9.2-0.8.0-SNAPSHOT.jar:0.8.0-SNAPSHOT]
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply(SimpleConsumer.scala:111)
>  [kafka_2.9.2-0.8.0-SNAPSHOT.jar:0.8.0-SNAPSHOT]
>   at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33) 
> [kafka_2.9.2-0.8.0-SNAPSHOT.jar:0.8.0-SNAPSHOT]
>   at kafka.consumer.SimpleConsumer.fetch(SimpleConsumer.scala:110) 
> [kafka_2.9.2-0.8.0-SNAPSHOT.jar:0.8.0-SNAPSHOT]
>   at 
> kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:96)
>  [kafka_2.9.2-0.8.0-SNAPSHOT.jar:0.8.0-SNAPSHOT]
>   at 
> kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:88) 
> [kafka_2.9.2-0.8.0-SNAPSHOT.jar:0.8.0-SNAPSHOT]
>   at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:51) 
> [kafka_2.9.2-0.8.0-SNAPSHOT.jar:0.8.0-SNAPSHOT]
> and this is how I create my Consumer (generic type parameters below were 
> stripped by the mail archive and have been restored):
>     public Boolean call() throws Exception {
>         Map<String, Integer> topicCountMap = new HashMap<>();
>         topicCountMap.put(topic, new Integer(1));
>         Map<String, List<KafkaStream<byte[], byte[]>>> consumerMap =
>             consumer.createMessageStreams(topicCountMap);
>         KafkaStream<byte[], byte[]> stream = consumerMap.get(topic).get(0);
>         ConsumerIterator<byte[], byte[]> it = stream.iterator();
>         it.next();
>         LOGGER.info("Received the message. Shutting down");
>         consumer.commitOffsets();
> 

[jira] [Resolved] (KAFKA-1632) No such method error on KafkaStream.head

2017-08-30 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-1632.
--
Resolution: Cannot Reproduce

 This is most likely caused by a Kafka version mismatch. Please reopen if you 
think the issue still exists


> No such method error on KafkaStream.head
> 
>
> Key: KAFKA-1632
> URL: https://issues.apache.org/jira/browse/KAFKA-1632
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.0
>Reporter: aarti gupta
>
> The following error is thrown when I call KafkaStream.head(), as shown in 
> the code snippet below:
>  WARN -  java.lang.NoSuchMethodError: 
> kafka.consumer.KafkaStream.head()Lkafka/message/MessageAndMetadata;
> My use case, is that I want to block on the receive() method, and when a 
> message is published on the topic, I 'return the head' of the queue to the 
> calling method, that processes it.
> I do not use partitioning and have a single stream.
> import com.google.common.collect.Maps;
> import x.x.x.Task;
> import kafka.consumer.ConsumerConfig;
> import kafka.consumer.KafkaStream;
> import kafka.javaapi.consumer.ConsumerConnector;
> import kafka.javaapi.consumer.ZookeeperConsumerConnector;
> import kafka.message.MessageAndMetadata;
> import org.codehaus.jettison.json.JSONException;
> import org.codehaus.jettison.json.JSONObject;
> import org.slf4j.Logger;
> import org.slf4j.LoggerFactory;
> import java.io.IOException;
> import java.util.List;
> import java.util.Map;
> import java.util.Properties;
> import java.util.concurrent.Callable;
> import java.util.concurrent.Executors;
> /**
>  * @author agupta
>  */
> public class KafkaConsumerDelegate implements ConsumerDelegate {
>     private ConsumerConnector consumerConnector;
>     private String topicName;
>     private static Logger LOG =
>         LoggerFactory.getLogger(KafkaProducerDelegate.class.getName());
>     private final Map<String, Integer> topicCount = Maps.newHashMap();
>     private Map<String, List<KafkaStream<byte[], byte[]>>> messageStreams;
>     private List<KafkaStream<byte[], byte[]>> kafkaStreams;
>
>     @Override
>     public Task receive(final boolean consumerConfirms) {
>         try {
>             LOG.info("Kafka consumer delegate listening on topic " + getTopicName());
>             kafkaStreams = messageStreams.get(getTopicName());
>             final KafkaStream<byte[], byte[]> kafkaStream = kafkaStreams.get(0);
>             return Executors.newSingleThreadExecutor().submit(new Callable<Task>() {
>                 @Override
>                 public Task call() throws Exception {
>                     final MessageAndMetadata<byte[], byte[]> messageAndMetadata =
>                         kafkaStream.head();
>                     final Task message = new Task() {
>                         @Override
>                         public byte[] getBytes() {
>                             return messageAndMetadata.message();
>                         }
>                     };
>                     return message;
>                 }
>             }).get();
>         } catch (Exception e) {
>             LOG.warn("Error in consumer " + e.getMessage());
>         }
>         return null;
>     }
>
>     @Override
>     public void initialize(JSONObject configData, boolean publisherAckMode)
>             throws IOException {
>         try {
>             this.topicName = configData.getString("topicName");
>             LOG.info("Topic name is " + topicName);
>         } catch (JSONException e) {
>             e.printStackTrace();
>             LOG.error("Error parsing configuration", e);
>         }
>         Properties properties = new Properties();
>         properties.put("zookeeper.connect", "localhost:2181");
>         properties.put("group.id", "testgroup");
>         ConsumerConfig consumerConfig = new ConsumerConfig(properties);
>         // only one stream and one topic (since we are not supporting partitioning)
>         topicCount.put(getTopicName(), 1);
>         consumerConnector = new ZookeeperConsumerConnector(consumerConfig);
>         messageStreams = consumerConnector.createMessageStreams(topicCount);
>     }
>
>     @Override
>     public void stop() throws IOException {
>         // TODO
>         throw new UnsupportedOperationException("Method Not Implemented");
>     }
>
>     public String getTopicName() {
>         return this.topicName;
>     }
> }
> Lastly, I am using the following binary 
> kafka_2.8.0-0.8.1.1  
> and the following maven dependency
> <dependency>
>     <groupId>org.apache.kafka</groupId>
>     <artifactId>kafka_2.10</artifactId>
>     <version>0.8.1.1</version>
> </dependency>
> Any suggestions?
> Thanks
> aarti



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Reopened] (KAFKA-1980) Console consumer throws OutOfMemoryError with large max-messages

2017-08-30 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar reopened KAFKA-1980:
--

> Console consumer throws OutOfMemoryError with large max-messages
> 
>
> Key: KAFKA-1980
> URL: https://issues.apache.org/jira/browse/KAFKA-1980
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.8.1.1, 0.8.2.0
>Reporter: Håkon Hitland
>Priority: Minor
> Attachments: kafka-1980.patch
>
>
> Tested on kafka_2.11-0.8.2.0
> Steps to reproduce:
> - Have any topic with at least 1 GB of data.
> - Use kafka-console-consumer.sh on the topic passing a large number to 
> --max-messages, e.g.:
> $ bin/kafka-console-consumer.sh --zookeeper localhost --topic test.large 
> --from-beginning --max-messages  | head -n 40
> Expected result:
> Should stream messages up to max-messages
> Result:
> Out of memory error:
> [2015-02-23 19:41:35,006] ERROR OOME with size 1048618 
> (kafka.network.BoundedByteBufferReceive)
> java.lang.OutOfMemoryError: Java heap space
>   at java.nio.HeapByteBuffer.(HeapByteBuffer.java:57)
>   at java.nio.ByteBuffer.allocate(ByteBuffer.java:331)
>   at 
> kafka.network.BoundedByteBufferReceive.byteBufferAllocate(BoundedByteBufferReceive.scala:80)
>   at 
> kafka.network.BoundedByteBufferReceive.readFrom(BoundedByteBufferReceive.scala:63)
>   at kafka.network.Receive$class.readCompletely(Transmission.scala:56)
>   at 
> kafka.network.BoundedByteBufferReceive.readCompletely(BoundedByteBufferReceive.scala:29)
>   at kafka.network.BlockingChannel.receive(BlockingChannel.scala:111)
>   at kafka.consumer.SimpleConsumer.liftedTree1$1(SimpleConsumer.scala:71)
>   at 
> kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:68)
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(SimpleConsumer.scala:112)
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply(SimpleConsumer.scala:112)
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply(SimpleConsumer.scala:112)
>   at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply$mcV$sp(SimpleConsumer.scala:111)
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply(SimpleConsumer.scala:111)
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply(SimpleConsumer.scala:111)
>   at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
>   at kafka.consumer.SimpleConsumer.fetch(SimpleConsumer.scala:110)
>   at 
> kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:94)
>   at 
> kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:86)
>   at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)
> As a first guess I'd say that this is caused by slice() taking more memory 
> than expected. Perhaps because it is called on an Iterable and not an 
> Iterator?
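The reporter's eager-versus-lazy slicing suspicion can be illustrated generically. A hedged Python sketch (not Kafka's actual Scala code): slicing a materialized collection allocates everything first, while slicing an iterator pulls only the items needed.

```python
from itertools import islice

def messages(n):
    # stand-in for a fetched message set; yields items lazily
    for i in range(n):
        yield i

# Eager "slice": list(...) materializes every element before slicing,
# roughly the failure mode suspected above (slicing an Iterable
# allocates the whole collection in memory).
eager_head = list(messages(100_000))[:3]

# Lazy slice: islice consumes only the first three items.
lazy_head = list(islice(messages(100_000), 3))

assert eager_head == lazy_head == [0, 1, 2]
```

Both produce the same result, but only the lazy form keeps memory proportional to the slice rather than to the whole stream.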



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-1463) producer fails with scala.tuple error

2017-08-30 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-1463.
--
Resolution: Won't Fix

 Please reopen if you think the issue still exists


> producer fails with scala.tuple error
> -
>
> Key: KAFKA-1463
> URL: https://issues.apache.org/jira/browse/KAFKA-1463
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.8.0
> Environment: java, springsource
>Reporter: Joe
>Assignee: Jun Rao
>
> Running on a windows machine trying to debug first kafka program. The program 
> fails on the following line:
> producer = new kafka.javaapi.producer.Producer(
>   new ProducerConfig(props)); 
> ERROR:
> Exception in thread "main" java.lang.VerifyError: class scala.Tuple2$mcLL$sp 
> overrides final method _1.()Ljava/lang/Object;
>   at java.lang.ClassLoader.defineClass1(Native Method)
>   at java.lang.ClassLoader.defineClass(ClassLoader.java:791)
>   at 
> java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)...
> unable to find solution online.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-1980) Console consumer throws OutOfMemoryError with large max-messages

2017-08-30 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-1980.
--
Resolution: Won't Fix

[~ndimiduk] Agree. Updated the JIRA.

> Console consumer throws OutOfMemoryError with large max-messages
> 
>
> Key: KAFKA-1980
> URL: https://issues.apache.org/jira/browse/KAFKA-1980
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.8.1.1, 0.8.2.0
>Reporter: Håkon Hitland
>Priority: Minor
> Attachments: kafka-1980.patch
>
>
> Tested on kafka_2.11-0.8.2.0
> Steps to reproduce:
> - Have any topic with at least 1 GB of data.
> - Use kafka-console-consumer.sh on the topic passing a large number to 
> --max-messages, e.g.:
> $ bin/kafka-console-consumer.sh --zookeeper localhost --topic test.large 
> --from-beginning --max-messages  | head -n 40
> Expected result:
> Should stream messages up to max-messages
> Result:
> Out of memory error:
> [2015-02-23 19:41:35,006] ERROR OOME with size 1048618 
> (kafka.network.BoundedByteBufferReceive)
> java.lang.OutOfMemoryError: Java heap space
>   at java.nio.HeapByteBuffer.(HeapByteBuffer.java:57)
>   at java.nio.ByteBuffer.allocate(ByteBuffer.java:331)
>   at 
> kafka.network.BoundedByteBufferReceive.byteBufferAllocate(BoundedByteBufferReceive.scala:80)
>   at 
> kafka.network.BoundedByteBufferReceive.readFrom(BoundedByteBufferReceive.scala:63)
>   at kafka.network.Receive$class.readCompletely(Transmission.scala:56)
>   at 
> kafka.network.BoundedByteBufferReceive.readCompletely(BoundedByteBufferReceive.scala:29)
>   at kafka.network.BlockingChannel.receive(BlockingChannel.scala:111)
>   at kafka.consumer.SimpleConsumer.liftedTree1$1(SimpleConsumer.scala:71)
>   at 
> kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:68)
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(SimpleConsumer.scala:112)
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply(SimpleConsumer.scala:112)
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply(SimpleConsumer.scala:112)
>   at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply$mcV$sp(SimpleConsumer.scala:111)
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply(SimpleConsumer.scala:111)
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply(SimpleConsumer.scala:111)
>   at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
>   at kafka.consumer.SimpleConsumer.fetch(SimpleConsumer.scala:110)
>   at 
> kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:94)
>   at 
> kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:86)
>   at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)
> As a first guess I'd say that this is caused by slice() taking more memory 
> than expected. Perhaps because it is called on an Iterable and not an 
> Iterator?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-4297) Cannot Stop Kafka with Shell Script

2017-08-30 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-4297.
--
Resolution: Duplicate

Closing this as a duplicate; there is a newer PR under KAFKA-4931.

> Cannot Stop Kafka with Shell Script
> ---
>
> Key: KAFKA-4297
> URL: https://issues.apache.org/jira/browse/KAFKA-4297
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.1
> Environment: CentOS 6.7
>Reporter: Mabin Jeong
>Assignee: Tom Bentley
>Priority: Critical
>  Labels: easyfix
>
> If Kafka's home path is long, Kafka cannot be stopped with 'kafka-server-stop.sh'.
> The command shows this message:
> ```
> No kafka server to stop
> ```
> This bug is caused by the command line being too long, for example:
> ```
> /home/bbdev/Amasser/etc/alternatives/jre/bin/java -Xms1G -Xmx5G -server 
> -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 
> -XX:+DisableExplicitGC -Djava.awt.headless=true 
> -Xloggc:/home/bbdev/Amasser/var/log/kafka/kafkaServer-gc.log -verbose:gc 
> -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps 
> -Dcom.sun.management.jmxremote 
> -Dcom.sun.management.jmxremote.authenticate=false 
> -Dcom.sun.management.jmxremote.ssl=false 
> -Dkafka.logs.dir=/home/bbdev/Amasser/var/log/kafka 
> -Dlog4j.configuration=file:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../config/log4j.properties
>  -cp 
> 

[jira] [Resolved] (KAFKA-4389) kafka-server.stop.sh not work

2017-08-30 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-4389.
--
Resolution: Duplicate

> kafka-server.stop.sh not work
> -
>
> Key: KAFKA-4389
> URL: https://issues.apache.org/jira/browse/KAFKA-4389
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.10.1.0
> Environment: centos7
>Reporter: JianwenSun
>
> The /proc/<pid>/cmdline file is limited to 4096 bytes, so ps ax | grep 
> 'kafka\.Kafka' does not work.  I also don't want to use jps.  Any other ways? 
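One alternative, assuming a Linux host: read /proc/<pid>/cmdline directly instead of parsing ps output. A minimal sketch; matching on "kafka.Kafka" is the usual target for a broker process, and the self-check below only verifies the mechanism on the current process:

```python
import os

def full_cmdline(pid):
    # /proc/<pid>/cmdline holds the complete NUL-separated argv, so
    # matching here is not subject to the truncation that ps applies
    # to very long command lines.
    with open("/proc/%d/cmdline" % pid, "rb") as f:
        return [a.decode() for a in f.read().split(b"\0") if a]

# For a broker one would scan all PIDs in /proc and match "kafka.Kafka"
# anywhere in the argument vector; here we just inspect ourselves.
args = full_cmdline(os.getpid())
assert args  # at least the executable path is present
```

Scanning /proc this way is what later fixes to kafka-server-stop.sh effectively work around by matching on a shorter, stable marker.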



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-270) sync producer / consumer test producing lot of kafka server exceptions & not getting the throughput mentioned here http://incubator.apache.org/kafka/performance.html

2017-08-30 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-270.
-
Resolution: Won't Fix

Closing due to inactivity. Please reopen if you think the issue still exists


>  sync producer / consumer test producing lot of kafka server exceptions & not 
> getting the throughput mentioned here 
> http://incubator.apache.org/kafka/performance.html
> --
>
> Key: KAFKA-270
> URL: https://issues.apache.org/jira/browse/KAFKA-270
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, core
>Affects Versions: 0.7
> Environment: Linux 2.6.18-238.1.1.el5 , x86_64 x86_64 x86_64 
> GNU/Linux 
> ext3 file system with raid10
>Reporter: Praveen Ramachandra
>  Labels: clients, core, newdev, performance
>
> I am getting ridiculously low producer and consumer throughput.
> I am using default config values for producer, consumer and broker
> which are very good starting points, as they should yield sufficient
> throughput.
> Appreciate if you point what settings/changes-in-code needs to be done
> to get higher throughput.
> I changed num.partitions in the server.config to 10.
> Please look below for exception and error messages from the server
> BTW: I am running server, zookeeper, producer, consumer on the same host.
> Consumer Code=
>long startTime = System.currentTimeMillis();
>long endTime = startTime + runDuration*1000l;
>Properties props = new Properties();
>props.put("zk.connect", "localhost:2181");
>props.put("groupid", subscriptionName); // to support multiple
> subscribers
>props.put("zk.sessiontimeout.ms", "400");
>props.put("zk.synctime.ms", "200");
>props.put("autocommit.interval.ms", "1000");
>consConfig =  new ConsumerConfig(props);
>consumer =
> kafka.consumer.Consumer.createJavaConsumerConnector(consConfig);
>        Map<String, Integer> topicCountMap = new HashMap<String, Integer>();
>        topicCountMap.put(topicName, new Integer(1)); // the topic to subscribe to
>        Map<String, List<KafkaMessageStream>> consumerMap =
>            consumer.createMessageStreams(topicCountMap);
>        KafkaMessageStream stream = consumerMap.get(topicName).get(0);
>        ConsumerIterator it = stream.iterator();
>while(System.currentTimeMillis() <= endTime )
>{
>it.next(); // discard data
>consumeMsgCount.incrementAndGet();
>}
> End consumer CODE
> =Producer CODE
>props.put("serializer.class", "kafka.serializer.StringEncoder");
>props.put("zk.connect", "localhost:2181");
>// Use random partitioner. Don't need the key type. Just
> set it to Integer.
>// The message is of type String.
>    producer = new kafka.javaapi.producer.Producer<Integer, String>(new ProducerConfig(props));
>long endTime = startTime + runDuration*1000l; // run duration
> is in seconds
>while(System.currentTimeMillis() <= endTime )
>{
>String msg =
> org.apache.commons.lang.RandomStringUtils.random(msgSizes.get(0));
>producer.send(new ProducerData(topicName, msg));
>pc.incrementAndGet();
>}
>java.util.Date date = new java.util.Date(System.currentTimeMillis());
>System.out.println(date+" :: stopped producer for topic"+topicName);
> =END Producer CODE
> I see a bunch of exceptions like this
> [2012-02-11 02:44:11,945] ERROR Closing socket for /188.125.88.145 because of 
> error (kafka.network.Processor)
> java.io.IOException: Connection reset by peer
>   at sun.nio.ch.FileChannelImpl.transferTo0(Native Method)
>   at 
> sun.nio.ch.FileChannelImpl.transferToDirectly(FileChannelImpl.java:405)
>   at sun.nio.ch.FileChannelImpl.transferTo(FileChannelImpl.java:506)
>   at kafka.message.FileMessageSet.writeTo(FileMessageSet.scala:107)
>   at kafka.server.MessageSetSend.writeTo(MessageSetSend.scala:51)
>   at kafka.network.MultiSend.writeTo(Transmission.scala:95)
>   at kafka.network.Processor.write(SocketServer.scala:332)
>   at kafka.network.Processor.run(SocketServer.scala:209)
>   at java.lang.Thread.run(Thread.java:662)
> java.io.IOException: Connection reset by peer
>   at sun.nio.ch.FileDispatcher.read0(Native Method)
>   at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:21)
>   at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:198)
>   at sun.nio.ch.IOUtil.read(IOUtil.java:171)
>   at 

[jira] [Resolved] (KAFKA-1492) Getting error when sending producer request at the broker end with a single broker

2017-08-30 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-1492.
--
Resolution: Cannot Reproduce

 Please reopen if you think the issue still exists


> Getting error when sending producer request at the broker end with a single 
> broker
> --
>
> Key: KAFKA-1492
> URL: https://issues.apache.org/jira/browse/KAFKA-1492
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.8.1.1
>Reporter: sriram
>Assignee: Jun Rao
>
> Tried to run a simple example by sending a message to a single broker . 
> Getting error 
> [2014-06-13 08:35:45,402] INFO Closing socket connection to /127.0.0.1. 
> (kafka.network.Processor)
> [2014-06-13 08:35:45,440] WARN [KafkaApi-1] Produce request with correlation 
> id 2 from client  on partition [samsung,0] failed due to Leader not local for 
> partition [samsung,0] on broker 1 (kafka.server.KafkaApis)
> [2014-06-13 08:35:45,440] INFO [KafkaApi-1] Send the close connection 
> response due to error handling produce request [clientId = , correlationId = 
> 2, topicAndPartition = [samsung,0]] with Ack=0 (kafka.server.KafkaApis)
> OS- Windows 7 , JDK 1.7 , Scala 2.10



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-4520) Kafka broker fails with not so user-friendly error msg when log.dirs is not set

2017-08-30 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-4520.
--
   Resolution: Fixed
Fix Version/s: 0.11.0.0

> Kafka broker fails with not so user-friendly error msg when log.dirs is not 
> set
> ---
>
> Key: KAFKA-4520
> URL: https://issues.apache.org/jira/browse/KAFKA-4520
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Buchi Reddy B
>Priority: Trivial
> Fix For: 0.11.0.0
>
>
> I tried to bring up a Kafka broker without setting log.dirs property and it 
> has failed with the following error.
> {code:java}
> [2016-12-07 23:41:08,020] INFO KafkaConfig values:
>  advertised.host.name = 100.96.7.10
>  advertised.listeners = null
>  advertised.port = null
>  authorizer.class.name =
>  auto.create.topics.enable = true
>  auto.leader.rebalance.enable = true
>  background.threads = 10
>  broker.id = 0
>  broker.id.generation.enable = false
>  broker.rack = null
>  compression.type = producer
>  connections.max.idle.ms = 60
>  controlled.shutdown.enable = true
>  controlled.shutdown.max.retries = 3
>  controlled.shutdown.retry.backoff.ms = 5000
>  controller.socket.timeout.ms = 3
>  default.replication.factor = 1
>  delete.topic.enable = false
>  fetch.purgatory.purge.interval.requests = 1000
>  group.max.session.timeout.ms = 30
>  group.min.session.timeout.ms = 6000
>  host.name =
>  inter.broker.protocol.version = 0.10.1-IV2
>  leader.imbalance.check.interval.seconds = 300
>  leader.imbalance.per.broker.percentage = 10
>  listeners = PLAINTEXT://0.0.0.0:9092
>  log.cleaner.backoff.ms = 15000
>  log.cleaner.dedupe.buffer.size = 134217728
>  log.cleaner.delete.retention.ms = 8640
>  log.cleaner.enable = true
>  log.cleaner.io.buffer.load.factor = 0.9
>  log.cleaner.io.buffer.size = 524288
>  log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
>  log.cleaner.min.cleanable.ratio = 0.5
>  log.cleaner.min.compaction.lag.ms = 0
>  log.cleaner.threads = 1
>  log.cleanup.policy = [delete]
>  log.dir = /tmp/kafka-logs
>  log.dirs =
>  log.flush.interval.messages = 9223372036854775807
>  log.flush.interval.ms = null
>  log.flush.offset.checkpoint.interval.ms = 6
>  log.flush.scheduler.interval.ms = 9223372036854775807
>  log.index.interval.bytes = 4096
>  log.index.size.max.bytes = 10485760
>  log.message.format.version = 0.10.1-IV2
>  log.message.timestamp.difference.max.ms = 9223372036854775807
>  log.message.timestamp.type = CreateTime
>  log.preallocate = false
>  log.retention.bytes = -1
>  log.retention.check.interval.ms = 30
>  log.retention.hours = 168
>  log.retention.minutes = null
>  log.retention.ms = null
>  log.roll.hours = 168
>  log.roll.jitter.hours = 0
>  log.roll.jitter.ms = null
>  log.roll.ms = null
>  log.segment.bytes = 1073741824
>  log.segment.delete.delay.ms = 6
>  max.connections.per.ip = 2147483647
>  max.connections.per.ip.overrides =
>  message.max.bytes = 112
>  metric.reporters = []
>  metrics.num.samples = 2
>  metrics.sample.window.ms = 3
>  min.insync.replicas = 1
>  num.io.threads = 8
>  num.network.threads = 3
>  num.partitions = 1
>  num.recovery.threads.per.data.dir = 1
>  num.replica.fetchers = 1
>  offset.metadata.max.bytes = 4096
>  offsets.commit.required.acks = -1
>  offsets.commit.timeout.ms = 5000
>  offsets.load.buffer.size = 5242880
>  offsets.retention.check.interval.ms = 60
>  offsets.retention.minutes = 1440
>  offsets.topic.compression.codec = 0
>  offsets.topic.num.partitions = 50
>  offsets.topic.replication.factor = 3
>  offsets.topic.segment.bytes = 104857600
>  port = 9092
>  principal.builder.class = class 
> org.apache.kafka.common.security.auth.DefaultPrincipalBuilder
>  producer.purgatory.purge.interval.requests = 1000
>  queued.max.requests = 500
>  quota.consumer.default = 9223372036854775807
>  quota.producer.default = 9223372036854775807
>  quota.window.num = 11
>  quota.window.size.seconds = 1
>  replica.fetch.backoff.ms = 1000
>  replica.fetch.max.bytes = 1048576
>  replica.fetch.min.bytes = 1
>  replica.fetch.response.max.bytes = 10485760
>  replica.fetch.wait.max.ms = 500
>  replica.high.watermark.checkpoint.interval.ms = 5000
>  replica.lag.time.max.ms = 1
>  replica.socket.receive.buffer.bytes = 65536
>  replica.socket.timeout.ms = 3
>  replication.quota.window.num = 11
>  

[jira] [Resolved] (KAFKA-805) log.append() may fail with a compressed messageset containing no uncompressed messages

2017-09-05 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-805.
-
Resolution: Fixed

Not able to reproduce the issue. It looks like this was fixed in newer versions. 

> log.append() may fail with a compressed messageset containing no uncompressed 
> messages 
> ---
>
> Key: KAFKA-805
> URL: https://issues.apache.org/jira/browse/KAFKA-805
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.0
>Reporter: Jun Rao
>Assignee: Jay Kreps
>
> If a broker receives a compressed messageset that contains no uncompressed 
> message, the log.append() logic may fail.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-1455) Expose ConsumerOffsetChecker as an api instead of being command line only

2017-09-07 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-1455.
--
Resolution: Fixed

This has been added to the Java Admin API.

> Expose ConsumerOffsetChecker as an api instead of being command line only
> -
>
> Key: KAFKA-1455
> URL: https://issues.apache.org/jira/browse/KAFKA-1455
> Project: Kafka
>  Issue Type: Improvement
>  Components: tools
>Reporter: Arup Malakar
>Priority: Minor
>
> I find ConsumerOffsetChecker very useful when it comes to checking offset/lag 
> for a consumer group. It would be nice if it could be exposed as a class that 
> could be used from other programs instead of being only a command line tool.





[jira] [Resolved] (KAFKA-2022) simpleconsumer.fetch(req) throws a java.nio.channels.ClosedChannelException: null exception when the original leader fails instead of being trapped in the fetchResponse

2017-09-07 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-2022.
--
Resolution: Won't Fix

I think we need to catch the exception and retry with a new leader. Please 
reopen if you think the issue still exists.


> simpleconsumer.fetch(req) throws a java.nio.channels.ClosedChannelException: 
> null exception when the original leader fails instead of being trapped in the 
> fetchResponse api while consuming messages
> -
>
> Key: KAFKA-2022
> URL: https://issues.apache.org/jira/browse/KAFKA-2022
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.8.2.1
> Environment: 3 linux nodes with both zookeepr & brokers running under 
> respective users on each..
>Reporter: Muqeet Mohammed Ali
>Assignee: Neha Narkhede
>
> simpleconsumer.fetch(req) throws a java.nio.channels.ClosedChannelException 
> when the original leader fails, instead of the failure being surfaced through 
> the fetchResponse api while consuming messages. My understanding was that any 
> fetch failure can be detected via the fetchResponse.hasError() call and then 
> handled by fetching the new leader. Below is the relevant code snippet from 
> the simple consumer, with a comment marking the line causing the exception. 
> Can you please comment on this?
> if (simpleconsumer == null) {
>   simpleconsumer = new SimpleConsumer(leaderAddress.getHostName(),
>       leaderAddress.getPort(), consumerTimeout, consumerBufferSize, consumerId);
> }
> FetchRequest req = new FetchRequestBuilder().clientId(getConsumerId())
>   .addFetch(topic, partition, offsetManager.getTempOffset(), consumerBufferSize)
>   // Note: the fetchSize might need to be increased
>   // if large batches are written to Kafka
>   .build();
> // exception is thrown at the below line
> FetchResponse fetchResponse = simpleconsumer.fetch(req);
> if (fetchResponse.hasError()) {
>   numErrors++;
> etc...
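The resolution comment suggests catching the exception and retrying against a newly resolved leader. A minimal sketch of that retry loop, where `resolveLeader` and `fetch` are hypothetical stand-ins for the caller's leader-metadata lookup and `simpleconsumer.fetch(req)`:

```java
import java.nio.channels.ClosedChannelException;
import java.util.function.Supplier;

public class LeaderFailoverFetch {

    /** A fetch call that may fail because the channel to a dead leader was closed. */
    public interface Fetch<L, R> {
        R apply(L leader) throws ClosedChannelException;
    }

    /**
     * Re-resolves the partition leader and retries the fetch whenever the
     * channel to the previous leader turns out to be closed. The names here
     * are illustrative, not Kafka API.
     */
    public static <L, R> R fetchWithFailover(Supplier<L> resolveLeader,
                                             Fetch<L, R> fetch,
                                             int maxRetries) {
        ClosedChannelException last = null;
        for (int attempt = 0; attempt <= maxRetries; attempt++) {
            L leader = resolveLeader.get();  // ask the cluster who leads the partition now
            try {
                return fetch.apply(leader);
            } catch (ClosedChannelException e) {
                last = e;                    // stale leader: loop and re-resolve
            }
        }
        throw new RuntimeException("fetch failed after " + (maxRetries + 1) + " attempts", last);
    }
}
```

With this shape, the `hasError()` path still handles broker-reported errors, while transport-level channel closure is handled by the retry loop.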





[jira] [Resolved] (KAFKA-980) Crash during log recovery can cause full recovery to never run

2017-09-07 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-980.
-
Resolution: Fixed

> Crash during log recovery can cause full recovery to never run
> --
>
> Key: KAFKA-980
> URL: https://issues.apache.org/jira/browse/KAFKA-980
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.7.1
>Reporter: Blake Smith
>
> After an unclean shutdown of the Kafka server, if the broker throws an 
> unhandled exception during log recovery, the broker can get in a state where 
> recovery never runs on a log file.
> We saw this problem manifest in production and is summarized on the mailing 
> list here: 
> http://mail-archives.apache.org/mod_mbox/kafka-users/201307.mbox/%3CCAKSpikjgp2sW2ycuf86JrjtAPxWBp92OOEmigVed=u=jfop...@mail.gmail.com%3E
> Because recovery state is not tracked explicitly, our kafka broker started 
> writing data even when the log files were not fully recovered. It feels to me 
> like a separate state flag for recovery should also be tracked in cases where 
> recovery does not fully run. What do you guys think?
> Steps to reproduce:
> 1. Shutdown the kafka broker
> 2. Create a directory named 'bogus' under the kafka log directory (won't 
> parse since it has no partition number)
> 3. Remove .kafka_cleanshutdown from the log directory to force a recovery
> 4. Start the kafka broker, observe:
> - Recovery will run on partition segments until it reaches the bogus 
> directory
> - Exception will be thrown during log loading from the bogus directory
> - Kafka will initiate a clean shutdown after the exception is thrown
> 5. Once the Kafka server is cleanly shutdown, start it again, observe:
> - Recovery will not try to run, since kafka was shutdown cleanly
> - Some partition log files have never been recovered
> 6. Remove the bogus log directory
> 7. Start Kafka broker, observe:
> - Recovery will not run
> - Kafka will start cleanly and begin accepting writes again, even though 
> recovery has never run and logs might be in a corrupt state





[jira] [Resolved] (KAFKA-641) ConsumerOffsetChecker breaks when using dns names x.b.com as opposed to raw public IP for broker.

2017-09-07 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-641.
-
Resolution: Fixed

This is not relevant in newer versions.

> ConsumerOffsetChecker breaks when using dns names x.b.com as opposed to raw 
> public IP for broker.
> -
>
> Key: KAFKA-641
> URL: https://issues.apache.org/jira/browse/KAFKA-641
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 0.7.2
>Reporter: Juan Valencia
>Priority: Minor
>  Labels: offset
>
> private val BrokerIpPattern = """.*:(\d+\.\d+\.\d+\.\d+):(\d+$)""".r
>   // e.g., 127.0.0.1-1315436360737:127.0.0.1:9092
> if you have a domain name (such as ec2 instance: 
> ec2-111-11-111-111.compute-1.amazonaws.com), the script breaks
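The quoted IP-only pattern cannot match a broker registration whose host part is a DNS name. A small sketch of the mismatch, using a hypothetical `HOST_OR_IP` pattern as one possible relaxation (the sample registration strings below are illustrative):

```java
import java.util.regex.Pattern;

public class BrokerPatternDemo {
    // The IP-only pattern quoted from ConsumerOffsetChecker (0.7.x).
    public static final Pattern IP_ONLY =
            Pattern.compile(".*:(\\d+\\.\\d+\\.\\d+\\.\\d+):(\\d+$)");
    // A relaxed pattern that accepts any host string without a colon.
    public static final Pattern HOST_OR_IP =
            Pattern.compile(".*?:([^:]+):(\\d+$)");

    public static void main(String[] args) {
        String ip  = "127.0.0.1-1315436360737:127.0.0.1:9092";
        String dns = "id-123:ec2-111-11-111-111.compute-1.amazonaws.com:9092";
        System.out.println(IP_ONLY.matcher(ip).matches());     // raw IP: matches
        System.out.println(IP_ONLY.matcher(dns).matches());    // DNS name: no match, script breaks
        System.out.println(HOST_OR_IP.matcher(dns).matches()); // relaxed pattern handles both
    }
}
```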





[jira] [Resolved] (KAFKA-47) Create topic support and new ZK data structures for intra-cluster replication

2017-09-07 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-47?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-47.

Resolution: Fixed

Closing this umbrella JIRA as all tasks are resolved.

> Create topic support and new ZK data structures for intra-cluster replication
> -
>
> Key: KAFKA-47
> URL: https://issues.apache.org/jira/browse/KAFKA-47
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jun Rao
>
> We need the DDL syntax for creating new topics. May need to use things like 
> javaCC. Also, we need to register new data structures in ZK accordingly.





[jira] [Resolved] (KAFKA-2166) Recreation breaks topic-list

2017-09-07 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-2166.
--
Resolution: Fixed

Deletion-related issues are fixed in newer versions. Please reopen if you think 
the issue still exists.


> Recreation breaks topic-list
> 
>
> Key: KAFKA-2166
> URL: https://issues.apache.org/jira/browse/KAFKA-2166
> Project: Kafka
>  Issue Type: Bug
>Reporter: Thomas Zimmer
>
> Hi, here are the steps to reproduce the issue:
> * Create a topic called "test"
> * Delete the topic "test"
> * Recreate the topic "test" 
> What will happen is that you will see the topic in the topic-list but it's 
> marked as deleted:
>  ./kafka-topics.sh --list --zookeeper zookpeer1.dev, zookeeper2.dev
> test - marked for deletion
> Is there a way to fix it without having to delete everything? We also tried 
> several restarts





[jira] [Resolved] (KAFKA-2595) Processor thread dies due to an uncaught NoSuchElementException

2017-09-07 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-2595.
--
Resolution: Fixed

See the discussion in KAFKA-1804

> Processor thread dies due to an uncaught NoSuchElementException
> ---
>
> Key: KAFKA-2595
> URL: https://issues.apache.org/jira/browse/KAFKA-2595
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.0
>Reporter: Shaun Senecal
> Attachments: server.log.2015-09-23-09.gz
>
>
> We are getting uncaught exceptions which seem to kill the processor threads.  
> The end result is that we end up with a bunch of connections in CLOSE_WAIT 
> and eventually the broker is unable to respond or hits the max open files 
> ulimit.
> {noformat}
> [2015-09-23 09:54:33,687] ERROR Uncaught exception in thread 
> 'kafka-network-thread-9092-2': (kafka.utils.Utils$)
> java.util.NoSuchElementException: None.get
> at scala.None$.get(Option.scala:347)
> at scala.None$.get(Option.scala:345)
> at kafka.network.ConnectionQuotas.dec(SocketServer.scala:524)
> at kafka.network.AbstractServerThread.close(SocketServer.scala:165)
> at kafka.network.AbstractServerThread.close(SocketServer.scala:157)
> at kafka.network.Processor.close(SocketServer.scala:374)
> at kafka.network.Processor.processNewResponses(SocketServer.scala:406)
> at kafka.network.Processor.run(SocketServer.scala:318)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> The issue appears to be the same as KAFKA-1577, except that its not happening 
> during shutdown.  We haven't been able to isolate when this happens, so we 
> dont have a good way to reproduce the issue.
> It also looks like KAFKA-2353 would work around the issue if it could be 
> back-ported, but the root cause should probably be fixed as well.
> - java version: 1.7.0_65
> - kafka version: 0.8.2.0
> - topics: 366
> - partitions: ~550 (a few 20 partition topics, and a bunch of 1 partition 
> topics)
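The stack trace shows Scala's `None.get` blowing up inside `ConnectionQuotas.dec` when the closing connection's address is no longer tracked. A minimal Java analogue of the failing lookup and a defensive alternative (class and method names here are hypothetical, not Kafka's):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

public class ConnectionQuotasSketch {
    // Simplified model of per-address connection counts.
    private final Map<String, Integer> counts = new HashMap<>();

    public void inc(String addr) {
        counts.merge(addr, 1, Integer::sum);
    }

    // Mirrors the failing code path: Optional.get() on a missing entry throws
    // NoSuchElementException, just like Scala's None.get in the stack trace.
    public void decUnsafe(String addr) {
        int c = Optional.ofNullable(counts.get(addr)).get(); // throws if addr untracked
        if (c == 1) counts.remove(addr); else counts.put(addr, c - 1);
    }

    // Defensive variant: treat a missing entry as an already-closed connection.
    public void decSafe(String addr) {
        Integer c = counts.get(addr);
        if (c == null) return;                 // never tracked; nothing to decrement
        if (c == 1) counts.remove(addr); else counts.put(addr, c - 1);
    }
}
```

The defensive variant keeps the processor thread alive at the cost of silently ignoring an accounting mismatch, which is why the root cause still deserves a fix.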





[jira] [Resolved] (KAFKA-252) Generalize getOffsetsBefore API to a new more general API getLeaderMetadata

2017-09-07 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-252.
-
Resolution: Fixed

These APIs were handled in newer versions.

> Generalize getOffsetsBefore API to a new more general API getLeaderMetadata
> ---
>
> Key: KAFKA-252
> URL: https://issues.apache.org/jira/browse/KAFKA-252
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 0.8.0
>Reporter: Neha Narkhede
>  Labels: project
>
> The relevant discussion is here - 
> https://issues.apache.org/jira/browse/KAFKA-238?focusedCommentId=13191350=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13191350
>  and on KAFKA-642
> We have an api that gets cluster-wide metadata (getTopicmetdata) which we use 
> for bootstraping knowledge about the cluster. But some things can only be 
> fetched from the leader for the partition.
> With replication, the metadata about log segments can only be returned by a 
> broker that hosts that partition locally. It will be good to expose log 
> segment metadata through a more general replica metadata API that in addition 
> to returning offsets, also returns other metadata like - number of log 
> segments, total size, last modified timestamp, highwater mark, and log end 
> offset.
> It would be good to do a wiki design on this and get consensus on that first 
> since this would be a public api.





[jira] [Resolved] (KAFKA-1563) High packet rate between brokers in kafka cluster.

2017-09-07 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-1563.
--
Resolution: Cannot Reproduce

Please reopen if you think the issue still exists.


> High packet rate between brokers in kafka cluster.
> --
>
> Key: KAFKA-1563
> URL: https://issues.apache.org/jira/browse/KAFKA-1563
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.1.1
>Reporter: Fedor Korotkiy
>
> On our kafka cluster with 3 brokers and input 40MB/s we see about 100K 
> packets/s of traffic between brokers (not including consumers). The majority 
> of packets are small (about 20 bytes of data).
> I have found that kafka server sets TcpNoDelay option on all sockets. 
> https://github.com/apache/kafka/blob/0.8.1/core/src/main/scala/kafka/network/SocketServer.scala#L202
> And I think that causes the issue.
> Can you please explain current behavior and fix it/make it configurable?
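The option in question controls Nagle's algorithm: with `TCP_NODELAY` set, small writes are sent immediately instead of being coalesced by the kernel, which raises the packet rate. A minimal stdlib sketch of toggling the option on a plain socket:

```java
import java.net.Socket;

public class TcpNoDelayDemo {
    public static void main(String[] args) throws Exception {
        // An unconnected socket is enough to inspect the option.
        Socket s = new Socket();
        s.setTcpNoDelay(true);   // what SocketServer does: disable Nagle's algorithm,
                                 // so each small write leaves the host immediately
        System.out.println("no-delay: " + s.getTcpNoDelay());
        s.setTcpNoDelay(false);  // Nagle enabled: kernel may coalesce small packets
        System.out.println("no-delay: " + s.getTcpNoDelay());
        s.close();
    }
}
```

Disabling Nagle trades packet count for latency, which is why a broker handling many small replica-fetch responses generates the high packet rates described above.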





[jira] [Resolved] (KAFKA-2623) Kafka broker not deleting logs after configured retention time properly

2017-09-07 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-2623.
--
Resolution: Fixed

Time-based log retention is enforced as of KIP-33. Please reopen if you think 
the issue still exists.


> Kafka broker not deleting logs after configured retention time properly
> ---
>
> Key: KAFKA-2623
> URL: https://issues.apache.org/jira/browse/KAFKA-2623
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 0.8.2.0
> Environment: DISTRIB_ID=Ubuntu
> DISTRIB_RELEASE=12.04
> DISTRIB_CODENAME=precise
> DISTRIB_DESCRIPTION="Ubuntu 12.04.5 LTS"
> NAME="Ubuntu"
> VERSION="12.04.5 LTS, Precise Pangolin"
> ID=ubuntu
> ID_LIKE=debian
> PRETTY_NAME="Ubuntu precise (12.04.5 LTS)"
> VERSION_ID="12.04"
>Reporter: Hao Zhang
>Assignee: Jay Kreps
>
> Context:
> To get an accurate estimate on how much retention we have for each 
> topic/partition, we have a cron job iterating each topic/partition folder on 
> each node of a cluster, measuring the timestamp difference between the newest 
> and oldest log files. 
> Problem:
> We notice that it's very common that between leaders and followers, the time 
> differences are vastly different. On the leader the timestamp differences are 
> normally about a week (our retention policy), but on the follower the 
> timestamp differences can sometimes range between just a few hours to 2-3 
> days.
> -rw-rw-r-- 1 kloak kloak 256M Oct  4 05:48 001536840178.log
> -rw-rw-r-- 1 kloak kloak 256M Oct  4 05:48 001537497855.log
> -rw-rw-r-- 1 kloak kloak 256M Oct  4 05:48 001538155208.log
> -rw-rw-r-- 1 kloak kloak 256M Oct  4 05:48 001538811692.log
> -rw-rw-r-- 1 kloak kloak 256M Oct  4 05:48 001539468154.log
> -rw-rw-r-- 1 kloak kloak 256M Oct  4 05:48 001540122891.log
> -rw-rw-r-- 1 kloak kloak 256M Oct  4 05:48 001540775681.log
> -rw-rw-r-- 1 kloak kloak 256M Oct  4 05:48 001541430669.log
> -rw-rw-r-- 1 kloak kloak 256M Oct  4 05:48 001542088333.log
> -rw-rw-r-- 1 kloak kloak 256M Oct  4 05:48 001542746722.log
> -rw-rw-r-- 1 kloak kloak 256M Oct  4 05:49 001543405006.log
> -rw-rw-r-- 1 kloak kloak 256M Oct  4 05:49 001544062197.log
> -rw-rw-r-- 1 kloak kloak 256M Oct  4 05:49 001544718413.log
> -rw-rw-r-- 1 kloak kloak 256M Oct  4 05:49 001545374173.log
> -rw-rw-r-- 1 kloak kloak 256M Oct  4 05:49 001546029145.log
> -rw-rw-r-- 1 kloak kloak 256M Oct  4 05:49 001546686144.log
> -rw-rw-r-- 1 kloak kloak 256M Oct  4 05:49 001547344190.log
> -rw-rw-r-- 1 kloak kloak 256M Oct  4 05:49 001548001698.log
> -rw-rw-r-- 1 kloak kloak 256M Oct  4 05:49 001548657672.log
> -rw-rw-r-- 1 kloak kloak 256M Oct  4 05:49 001549312958.log
> -rw-rw-r-- 1 kloak kloak 256M Oct  4 05:49 001549969014.log
> -rw-rw-r-- 1 kloak kloak 256M Oct  4 05:50 001550623380.log
> -rw-rw-r-- 1 kloak kloak 256M Oct  4 05:50 001551279821.log
> -rw-rw-r-- 1 kloak kloak 256M Oct  4 05:50 001551937920.log
> -rw-rw-r-- 1 kloak kloak 256M Oct  4 05:50 001552597354.log
> -rw-rw-r-- 1 kloak kloak 256M Oct  4 05:50 001553256336.log
> -rw-rw-r-- 1 kloak kloak 256M Oct  4 05:50 001553914505.log
> -rw-rw-r-- 1 kloak kloak 256M Oct  4 05:50 001554571426.log
> -rw-rw-r-- 1 kloak kloak 256M Oct  4 05:50 001555228277.log
> -rw-rw-r-- 1 kloak kloak 256M Oct  4 05:50 001555882081.log
> -rw-rw-r-- 1 kloak kloak 256M Oct  4 05:50 001556538902.log
> -rw-rw-r-- 1 kloak kloak 256M Oct  4 05:50 001557196332.log
> -rw-rw-r-- 1 kloak kloak 256M Oct  4 05:51 001557852974.log
> -rw-rw-r-- 1 kloak kloak 256M Oct  4 05:51 001558510709.log
> -rw-rw-r-- 1 kloak kloak 256M Oct  4 05:51 001559166839.log
> -rw-rw-r-- 1 kloak kloak 256M Oct  4 05:51 001559823667.log
> -rw-rw-r-- 1 kloak kloak 256M Oct  4 05:51 001560478631.log
> -rw-rw-r-- 1 kloak kloak 256M Oct  4 05:51 001561136505.log
> -rw-rw-r-- 1 kloak kloak 256M Oct  4 05:51 00156179.log
> -rw-rw-r-- 1 kloak kloak 256M Oct  4 05:51 001562450149.log
> -rw-rw-r-- 1 kloak kloak 256M Oct  4 05:51 001563107321.log
> -rw-rw-r-- 1 kloak kloak 256M Oct  4 05:51 001563763826.log
> -rw-rw-r-- 1 kloak kloak 256M Oct  4 05:52 001564420526.log
> -rw-rw-r-- 1 kloak kloak 256M Oct  4 05:52 001565076456.log
> -rw-rw-r-- 1 kloak kloak 256M Oct  4 05:52 001565735877.log
> -rw-rw-r-- 1 kloak kloak 256M Oct  4 05:52 001566394151.log
> -rw-rw-r-- 1 kloak kloak 256M Oct  4 05:52 001567051743.log
> -rw-rw-r-- 1 kloak kloak 256M Oct  4 05:52 001567709678.log
> 

[jira] [Resolved] (KAFKA-3113) Kafka simple consumer inconsistent result

2017-09-07 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3113.
--
Resolution: Cannot Reproduce

Maybe the seed broker is different for different topics; the SimpleConsumer 
should point to the leader broker of the topic. Please reopen if the issue 
still exists.


> Kafka simple consumer inconsistent result
> -
>
> Key: KAFKA-3113
> URL: https://issues.apache.org/jira/browse/KAFKA-3113
> Project: Kafka
>  Issue Type: Bug
>Reporter: Goutam Chowdhury
>
> I am trying to read Kafka messages using the Spark API in batch mode. To 
> achieve this, I need the start and last offset of the mentioned topic. To get 
> the start and last offset I am creating a simple consumer with the code below:
> -
> var consumer = new SimpleConsumer(seedBroker, seedBrokerPort, 10, 64 * 1024, clientName);
> var topicAndPartition = new TopicAndPartition(topic, partition.toInt)
> var requestInfo = new HashMap[TopicAndPartition, PartitionOffsetRequestInfo];
> requestInfo.put(topicAndPartition, new PartitionOffsetRequestInfo(whichTime, 1))
> logger.info("requestInfo - " + requestInfo);
> var request = new kafka.javaapi.OffsetRequest(requestInfo, OffsetRequest.CurrentVersion, clientName);
> var response = consumer.getOffsetsBefore(request);
> ---
> I am using BIG IP addresses for seed broker  which actually has four brokers 
> . Now if I do spark submit then some times it get exception with below 
> response 
> response - OffsetResponse(0,Map([ecp.demo.patents.control,0] -> error: 
> kafka.common.UnknownTopicOrPartitionException offsets: ))
> but sometimes I get a proper response. Could anybody explain why it is like 
> that? 





[jira] [Resolved] (KAFKA-1197) Count of bytes or messages of a topic stored in kafka

2017-09-07 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-1197.
--
Resolution: Fixed

Size, LogStartOffset and LogEndOffset are exposed as metrics in newer versions.

> Count of bytes or messages of a topic stored in kafka
> -
>
> Key: KAFKA-1197
> URL: https://issues.apache.org/jira/browse/KAFKA-1197
> Project: Kafka
>  Issue Type: New Feature
>  Components: core
>Affects Versions: 0.7.2, 0.8.0
>Reporter: Hanish Bansal
>Priority: Minor
>
> There should be direct way of measuring count of messages or bytes for a 
> topic stored in Kafka.
> There are already some very useful metrics like byteRate and messageRate 
> using what we can see count of bytes/messages coming into Kafka broker.
> I was looking for some jmx metrics that can give count of messages/bytes 
> stored in kafka.
> If we look into data stores like hbase we can see  how many messages are  
> stored in hbase or if we look into search engine like elasticsearch then also 
> we can see how many messages are stored/indexed in elasticsearch. In a similar 
> way I was expecting that there should be some way to see the count of messages 
> or bytes for a topic stored in kafka without using any external tool.
> It will be really helpful if there is some support for this using some jmx 
> metric or by script.





[jira] [Resolved] (KAFKA-986) Topic Consumption Across multiple instances of consumer groups

2017-09-07 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-986.
-
Resolution: Cannot Reproduce

> Topic Consumption Across multiple instances of consumer groups
> --
>
> Key: KAFKA-986
> URL: https://issues.apache.org/jira/browse/KAFKA-986
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.8.0
> Environment: Linux
>Reporter: Subbu Srinivasan
>Assignee: Neha Narkhede
>
> Folks,
> How can we simulate the notion of queues for consumers from multiple 
> instances?
> For eg: I have a topic log.
> From a single machine ( I tried from different machines also) I started two 
> consumers on the same topic with the same group id. Both consumers get copies 
> of messages. 
> bin/kafka-console-consumer.sh --zookeeper kafka1:2181  --topic log --group 1
> bin/kafka-console-consumer.sh --zookeeper kafka1:2181  --topic log --group 1
> From the design section at http://kafka.apache.org/design.html
> 
> Each consumer process belongs to a consumer group and each message is 
> delivered to exactly one process within every consumer group. Hence a 
> consumer group allows many processes or machines to logically act as a single 
> consumer. The concept of consumer group is very powerful and can be used to 
> support the semantics of either a queue or topic as found in JMS. To support 
> queue semantics, we can put all consumers in a single consumer group, in 
> which case each message will go to a single consumer. 
> 
> Can someone elaborate on this?
> Thanks





[jira] [Resolved] (KAFKA-1093) Log.getOffsetsBefore(t, …) does not return the last confirmed offset before t

2017-09-07 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-1093.
--
Resolution: Invalid

The relevant part of the code is no longer available, so closing this.

> Log.getOffsetsBefore(t, …) does not return the last confirmed offset before t
> -
>
> Key: KAFKA-1093
> URL: https://issues.apache.org/jira/browse/KAFKA-1093
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.0
>Reporter: Swapnil Ghike
>Assignee: Swapnil Ghike
> Attachments: KAFKA-1093.patch
>
>
> Let's say there are three log segments s1, s2, s3.
> In Log.getoffsetsBefore(t, …), the offsetTimeArray will look like - 
> [(s1.start, s1.lastModified), (s2.start, s2.lastModified), (s3.start, 
> s3.lastModified), (logEndOffset, currentTimeMs)].
> Let's say s2.lastModified < t < s3.lastModified. getOffsetsBefore(t, 1) will 
> return Seq(s2.start).
> However, we already know s3.firstAppendTime (s3.created in trunk). So, if 
> s3.firstAppendTime < t < s3.lastModified, we should rather return s3.start. 
> This also resolves another bug wherein the log has only one segment and 
> getOffsetsBefore() returns an empty Seq if the timestamp provided is less 
> than the lastModified of the only segment. We should rather return the 
> startOffset of the segment if the timestamp is greater than the 
> firstAppendTime of the segment.
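The selection rule proposed above can be sketched as follows; `Segment` and `offsetBefore` are hypothetical simplifications for illustration, not the shipped Log code:

```java
import java.util.List;

public class OffsetsBeforeSketch {

    /** Minimal model of a log segment with the two timestamps the report discusses. */
    public static final class Segment {
        final long start;            // first offset in the segment
        final long firstAppendTime;  // when the first message was appended
        final long lastModified;     // mtime of the segment file

        public Segment(long start, long firstAppendTime, long lastModified) {
            this.start = start;
            this.firstAppendTime = firstAppendTime;
            this.lastModified = lastModified;
        }
    }

    /**
     * Start offset of the latest segment whose first append happened before t,
     * i.e. the fix the report proposes: consult firstAppendTime rather than
     * only lastModified. Returns null when no segment has data older than t.
     */
    public static Long offsetBefore(List<Segment> segments, long t) {
        Long candidate = null;
        for (Segment s : segments) {          // segments are in offset order
            if (s.firstAppendTime < t) {
                candidate = s.start;          // keep the latest qualifying segment
            }
        }
        return candidate;
    }
}
```

In the single-segment case this returns the segment's start offset whenever t falls between firstAppendTime and lastModified, instead of an empty result.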





[jira] [Resolved] (KAFKA-1993) Enable topic deletion as default

2017-09-07 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-1993.
--
Resolution: Fixed

Resolving this as a duplicate of KAFKA-5384.

> Enable topic deletion as default
> 
>
> Key: KAFKA-1993
> URL: https://issues.apache.org/jira/browse/KAFKA-1993
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.0
>Reporter: Gwen Shapira
>Assignee: Gwen Shapira
> Attachments: KAFKA-1993.patch
>
>
> Since topic deletion is now throughly tested and works as well as most Kafka 
> features, we should enable it by default.





[jira] [Resolved] (KAFKA-3033) Reassigning partition stuck in progress

2017-09-07 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3033.
--
Resolution: Duplicate

 Resolving this as a duplicate of KAFKA-4914. 

> Reassigning partition stuck in progress
> ---
>
> Key: KAFKA-3033
> URL: https://issues.apache.org/jira/browse/KAFKA-3033
> Project: Kafka
>  Issue Type: Bug
>  Components: replication
>Affects Versions: 0.9.0.0
> Environment: centos 7.2
>Reporter: Leo Xuzhang Lin
>Assignee: Neha Narkhede
>Priority: Critical
>  Labels: reliability
>
> We were trying to increase the replication factor on a test topic we've 
> created. 
> We followed the documentation's instruction:
> http://kafka.apache.org/documentation.html#basic_ops_increase_replication_factor
> and received:
> ```
> Current partition replica assignment
> {"version":1,"partitions":[{"topic":"test","partition":0,"replicas":[1]}]}
> Save this to use as the --reassignment-json-file option during rollback
> Successfully started reassignment of partitions 
> {"version":1,"partitions":[{"topic":"test"
> ,"partition":0,"replicas":["1","2","3"]}]}
> ```
> After that, whenever we try to verify, it is stuck on:
> ```
> Status of partition reassignment:
> Reassignment of partition [test,0] is still in progress
> ```
> - We tried restarting the cluster and it still did not work.
> - The topic has 1 partition
> - The zookeeper /admin/reassign_partitions znode is empty




