[jira] [Assigned] (KAFKA-4541) Add capability to create delegation token

2017-08-03 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar reassigned KAFKA-4541:


Assignee: Manikumar  (was: Ashish Singh)

> Add capability to create delegation token
> -
>
> Key: KAFKA-4541
> URL: https://issues.apache.org/jira/browse/KAFKA-4541
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Ashish Singh
>Assignee: Manikumar
>
> Add request/ response and server side handling to create delegation tokens.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5647) Use async ZookeeperClient for Admin operations

2017-08-04 Thread Manikumar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16114219#comment-16114219
 ] 

Manikumar commented on KAFKA-5647:
--

[~ijuma] [~onurkaraman] Are we going to refactor ZkUtils, 
ZkNodeChangeNotificationListener, etc. to use the async ZookeeperClient? I have 
used ZkUtils and ZkNodeChangeNotificationListener in my delegation token work. I 
am planning to use the async ZookeeperClient, but I think we need to migrate the 
existing utilities. Right?

> Use async ZookeeperClient for Admin operations
> --
>
> Key: KAFKA-5647
> URL: https://issues.apache.org/jira/browse/KAFKA-5647
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Ismael Juma
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (KAFKA-5647) Use async ZookeeperClient for Admin operations

2017-08-04 Thread Manikumar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16114219#comment-16114219
 ] 

Manikumar edited comment on KAFKA-5647 at 8/4/17 10:50 AM:
---

[~ijuma] [~onurkaraman] Are we going to refactor ZkUtils, 
ZkNodeChangeNotificationListener, etc. to use the async ZookeeperClient? I have 
used ZkUtils and ZkNodeChangeNotificationListener in my delegation token work. I 
am planning to use the async ZookeeperClient, but I think we need to first 
migrate the existing utilities. Right?


was (Author: omkreddy):
[~ijuma] [~onurkaraman] Are we going to refactor ZkUtils, 
ZkNodeChangeNotificationListener, etc. to use the async ZookeeperClient? I have 
used ZkUtils and ZkNodeChangeNotificationListener in my delegation token work. I 
am planning to use the async ZookeeperClient, but I think we need to migrate the 
existing utilities. Right?

> Use async ZookeeperClient for Admin operations
> --
>
> Key: KAFKA-5647
> URL: https://issues.apache.org/jira/browse/KAFKA-5647
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Ismael Juma
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5714) Allow whitespaces in the principal name

2017-08-15 Thread Manikumar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16127513#comment-16127513
 ] 

Manikumar commented on KAFKA-5714:
--

ZK-based topic creation/deletion doesn't go through ACL authorization, so I am 
not sure how these are related. You can enable the authorizer logs to verify 
any denied operations, as sketched below.
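
For reference, a sketch of the relevant log4j.properties entries, assuming the 
appender and logger names used by the stock broker config (default levels may 
differ across versions; DEBUG shows allowed operations as well as denials):

{code}
log4j.appender.authorizerAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.authorizerAppender.File=${kafka.logs.dir}/kafka-authorizer.log
log4j.appender.authorizerAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.authorizerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

# assumes the authorizer logger name from the shipped config
log4j.logger.kafka.authorizer.logger=DEBUG, authorizerAppender
log4j.additivity.kafka.authorizer.logger=false
{code}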

> Allow whitespaces in the principal name
> ---
>
> Key: KAFKA-5714
> URL: https://issues.apache.org/jira/browse/KAFKA-5714
> Project: Kafka
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 0.10.2.1
>Reporter: Alla Tumarkin
>Assignee: Manikumar
>
> Request
> Improve parser behavior to allow whitespaces in the principal name in the 
> config file, as in:
> {code}
> super.users=User:CN=Unknown, OU=Unknown, O=Unknown, L=Unknown, ST=Unknown, 
> C=Unknown
> {code}
> Background
> Current implementation requires that there are no whitespaces after commas, 
> i.e.
> {code}
> super.users=User:CN=Unknown,OU=Unknown,O=Unknown,L=Unknown,ST=Unknown,C=Unknown
> {code}
> Note: having a semicolon at the end doesn't help, i.e. this does not work 
> either
> {code}
> super.users=User:CN=Unknown, OU=Unknown, O=Unknown, L=Unknown, ST=Unknown, 
> C=Unknown;
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-1821) Example shell scripts broken

2017-08-15 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-1821.
--
Resolution: Fixed

It is working in newer Kafka versions.

> Example shell scripts broken
> 
>
> Key: KAFKA-1821
> URL: https://issues.apache.org/jira/browse/KAFKA-1821
> Project: Kafka
>  Issue Type: Bug
>  Components: packaging, tools
>Affects Versions: 0.8.1.1
> Environment: Ubuntu 14.04, Linux 75477193b766 3.13.0-24-generic 
> #46-Ubuntu SMP Thu Apr 10 19:11:08 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux. 
> Scala: 2.8.0
>Reporter: Yong Fu
>Priority: Minor
>
> After running ./gradlew jarAll to generate all jars, including those for the 
> examples, I tried to run the producer-consumer demo from the shell scripts. 
> But it doesn't work and throws a ClassNotFoundException. It seems the shell 
> scripts (java-producer-consumer-demo and java-simple-consumer-demo) still 
> assume the library structure used by sbt, so they cannot find the jar files 
> under the new structure enforced by gradle. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-2206) Add AlterConfig and DescribeConfig requests to Kafka

2017-08-15 Thread Manikumar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16127676#comment-16127676
 ] 

Manikumar commented on KAFKA-2206:
--

This looks like a duplicate of KIP-133 / KAFKA-3267. If so, we can close this 
JIRA.
cc [~ijuma] 

> Add AlterConfig and DescribeConfig requests to Kafka
> 
>
> Key: KAFKA-2206
> URL: https://issues.apache.org/jira/browse/KAFKA-2206
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Aditya Auradkar
>Assignee: Aditya Auradkar
>
> See 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-21+-+Dynamic+Configuration#KIP-21-DynamicConfiguration-ConfigAPI



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-1832) Async Producer will cause 'java.net.SocketException: Too many open files' when broker host does not exist

2017-08-15 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-1832.
--
Resolution: Fixed

Fixed in  KAFKA-1041

> Async Producer will cause 'java.net.SocketException: Too many open files' 
> when broker host does not exist
> -
>
> Key: KAFKA-1832
> URL: https://issues.apache.org/jira/browse/KAFKA-1832
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.8.1, 0.8.1.1
> Environment: linux
>Reporter: barney
>Assignee: Jun Rao
>
> h3.How to reproduce the problem:
> * producer configuration:
> ** producer.type=async
> ** metadata.broker.list=not.existed.com:9092
> Make sure the host '*not.existed.com*' does not exist in DNS server or 
> /etc/hosts;
> * send a lot of messages continuously using the above producer
> It will cause '*java.net.SocketException: Too many open files*' after a 
> while; alternatively, you can use '*lsof -p $pid|wc -l*' to check the count 
> of open files, which keeps increasing as time goes by until it reaches the 
> system limit (check with '*ulimit -n*').
> h3.Problem cause:
> {code:title=kafka.network.BlockingChannel|borderStyle=solid} 
> channel.connect(new InetSocketAddress(host, port))
> {code}
> this line will throw an exception 
> '*java.nio.channels.UnresolvedAddressException*' when broker host does not 
> exist, and at this same time the field '*connected*' is false;
> In *kafka.producer.SyncProducer*, '*disconnect()*' will not invoke 
> '*blockingChannel.disconnect()*' because '*blockingChannel.isConnected*' is 
> false which means the FileDescriptor will be created but never closed;
> h3.More:
> When the broker is a non-existent IP (for example: 
> metadata.broker.list=1.1.1.1:9092) instead of a non-existent host, the 
> problem does not appear;
> In *SocketChannelImpl.connect()*, '*Net.checkAddress()*' is not in the 
> try-catch block but '*Net.connect()*' is, which makes the difference;
> h3.Temporary Solution:
> {code:title=kafka.network.BlockingChannel|borderStyle=solid} 
> try {
>   channel.connect(new InetSocketAddress(host, port))
> } catch {
>   case e: UnresolvedAddressException =>
>     disconnect()
>     throw e
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-2220) Improvement: Could we support rewind by time ?

2017-08-15 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-2220.
--
Resolution: Fixed

This got fixed in  KAFKA-4743 / KIP-122.
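
For reference, a sketch of a reset-by-time invocation with the KIP-122 tool 
(the group/topic names and the timestamp are placeholders):

{code}
bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
  --group my-group --topic my-topic \
  --reset-offsets --to-datetime 2017-08-15T10:00:00.000 --execute
{code}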

> Improvement: Could we support  rewind by time  ?
> 
>
> Key: KAFKA-2220
> URL: https://issues.apache.org/jira/browse/KAFKA-2220
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.8.1.1
>Reporter: Li Junjun
> Attachments: screenshot.png
>
>
> Improvement: Support rewind by time!
> My scenario is as follows:
>A program reads records from Kafka, processes them, and then writes to a 
> dir in HDFS like /hive/year=/month=xx/day=xx/hour=10. If the program goes 
> down, I can restart it, and it reads from the last offset.
> But what if the program was configured with wrong params? Then I need to 
> remove the dir hour=10, reconfigure my program, and find the offset where 
> hour=10 starts, but currently I can't do this.
> And there are many scenarios like this.
> So, can we add a time partition, so that we can rewind by time?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5714) Allow whitespaces in the principal name

2017-08-15 Thread Manikumar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16127737#comment-16127737
 ] 

Manikumar commented on KAFKA-5714:
--

By default, the SSL user name will be of the form 
"CN=writeuser,OU=Unknown,O=Unknown,L=Unknown,ST=Unknown,C=Unknown", without 
any spaces.
Are you sure there are spaces in your SSL username?
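
For reference, when such a DN is used with the ACL tooling, the whole principal 
is quoted as one string; a sketch with placeholder topic/principal values (the 
DN here mirrors the default form above):

{code}
bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 \
  --add --allow-principal "User:CN=writeuser,OU=Unknown,O=Unknown,L=Unknown,ST=Unknown,C=Unknown" \
  --operation Read --topic test
{code}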

> Allow whitespaces in the principal name
> ---
>
> Key: KAFKA-5714
> URL: https://issues.apache.org/jira/browse/KAFKA-5714
> Project: Kafka
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 0.10.2.1
>Reporter: Alla Tumarkin
>Assignee: Manikumar
>
> Request
> Improve parser behavior to allow whitespaces in the principal name in the 
> config file, as in:
> {code}
> super.users=User:CN=Unknown, OU=Unknown, O=Unknown, L=Unknown, ST=Unknown, 
> C=Unknown
> {code}
> Background
> Current implementation requires that there are no whitespaces after commas, 
> i.e.
> {code}
> super.users=User:CN=Unknown,OU=Unknown,O=Unknown,L=Unknown,ST=Unknown,C=Unknown
> {code}
> Note: having a semicolon at the end doesn't help, i.e. this does not work 
> either
> {code}
> super.users=User:CN=Unknown, OU=Unknown, O=Unknown, L=Unknown, ST=Unknown, 
> C=Unknown;
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-2283) scheduler exception on non-controller node when shutdown

2017-08-15 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-2283.
--
Resolution: Fixed

> scheduler exception on non-controller node when shutdown
> 
>
> Key: KAFKA-2283
> URL: https://issues.apache.org/jira/browse/KAFKA-2283
> Project: Kafka
>  Issue Type: Bug
>  Components: controller
>Affects Versions: 0.8.2.1
> Environment: linux debian
>Reporter: allenlee
>Assignee: Neha Narkhede
>Priority: Minor
>
> When broker shutdown, there is an error log about 'Kafka scheduler has not 
> been started'.
> It only appears on non-controller node. If this broker is the controller, it 
> shutdown without warning log.
> IMHO, *autoRebalanceScheduler.shutdown()* should only valid for controller, 
> right?
> {quote}
> [2015-06-17 22:32:51,814] INFO Shutdown complete. (kafka.log.LogManager)
> [2015-06-17 22:32:51,815] WARN Kafka scheduler has not been started 
> (kafka.utils.Utils$)
> java.lang.IllegalStateException: Kafka scheduler has not been started
> at kafka.utils.KafkaScheduler.ensureStarted(KafkaScheduler.scala:114)
> at kafka.utils.KafkaScheduler.shutdown(KafkaScheduler.scala:86)
> at 
> kafka.controller.KafkaController.onControllerResignation(KafkaController.scala:350)
> at 
> kafka.controller.KafkaController.shutdown(KafkaController.scala:664)
> at 
> kafka.server.KafkaServer$$anonfun$shutdown$8.apply$mcV$sp(KafkaServer.scala:285)
> at kafka.utils.Utils$.swallow(Utils.scala:172)
> at kafka.utils.Logging$class.swallowWarn(Logging.scala:92)
> at kafka.utils.Utils$.swallowWarn(Utils.scala:45)
> at kafka.utils.Logging$class.swallow(Logging.scala:94)
> at kafka.utils.Utils$.swallow(Utils.scala:45)
> at kafka.server.KafkaServer.shutdown(KafkaServer.scala:285)
> at 
> kafka.server.KafkaServerStartable.shutdown(KafkaServerStartable.scala:42)
> at kafka.Kafka$$anon$1.run(Kafka.scala:42)
> [2015-06-17 22:32:51,818] INFO Terminate ZkClient event thread. 
> (org.I0Itec.zkclient.ZkEventThread)
> {quote}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-3087) Fix documentation for retention.ms property and update documentation for LogConfig.scala class

2017-08-16 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3087.
--
Resolution: Fixed

This doc issue was fixed in newer Kafka versions.

> Fix documentation for retention.ms property and update documentation for 
> LogConfig.scala class
> --
>
> Key: KAFKA-3087
> URL: https://issues.apache.org/jira/browse/KAFKA-3087
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Reporter: Raju Bairishetti
>Assignee: Jay Kreps
>Priority: Critical
>  Labels: documentation
>
> Log retention settings can be set on the broker, and some properties can be 
> overridden at the topic level. 
> |Property |Default|Server Default property| Description|
> |retention.ms|7 days|log.retention.minutes|This configuration controls the 
> maximum time we will retain a log before we will discard old log segments to 
> free up space if we are using the "delete" retention policy. This represents 
> an SLA on how soon consumers must read their data.|
> But retention.ms is in milliseconds, not minutes. So the corresponding *Server 
> Default property* should be *log.retention.ms* instead of 
> *log.retention.minutes*.
> It would be better to state whether the time is in millis/minutes/hours on 
> the documentation page and to document it in the code as well (right now the 
> code just says *age*; we should specify the age's time granularity).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5714) Allow whitespaces in the principal name

2017-08-16 Thread Manikumar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16128831#comment-16128831
 ] 

Manikumar commented on KAFKA-5714:
--

>>The point is, that I am expecting the same behavior, whether I put this name 
>>in server.properties with spaces, or without.
Ok, I got your point, but why are we expecting the same behavior? KafkaPrincipal 
is formed from the name of the principal received from the underlying channel. 
In the case of SSL, it is the string representation of the X.500 certificate 
principal. This is a comma-separated attribute key/value string without any 
spaces, so we expect the same string to be used in configs (super.users) and 
scripts (kafka-acls.sh).

I am not sure we want to trim whitespace from the principal name; let us hear 
others' opinions on this. We also have the PrincipalBuilder interface for any 
customization, as sketched below.
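
For example, a rough, hypothetical sketch of such a customization against the 
0.10/0.11-era PrincipalBuilder interface, collapsing whitespace after commas in 
the SSL DN (the class name and the exact normalization are illustrative only, 
not an endorsed approach):

{code}
import java.security.Principal;
import java.util.Map;

import org.apache.kafka.common.KafkaException;
import org.apache.kafka.common.network.Authenticator;
import org.apache.kafka.common.network.TransportLayer;
import org.apache.kafka.common.security.auth.PrincipalBuilder;

// Hypothetical: normalize "CN=x, OU=y" to "CN=x,OU=y" before authorization
public class TrimmingPrincipalBuilder implements PrincipalBuilder {

    @Override
    public void configure(Map<String, ?> configs) {}

    @Override
    public Principal buildPrincipal(TransportLayer transportLayer,
                                    Authenticator authenticator) throws KafkaException {
        try {
            // Strip whitespace that follows commas in the peer's DN
            final String name =
                transportLayer.peerPrincipal().getName().replaceAll(",\\s+", ",");
            return new Principal() {
                @Override
                public String getName() { return name; }
            };
        } catch (Exception e) {
            throw new KafkaException("Failed to build principal", e);
        }
    }

    @Override
    public void close() throws KafkaException {}
}
{code}

It would be enabled on the broker via principal.builder.class (fully qualified 
class name).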


> Allow whitespaces in the principal name
> ---
>
> Key: KAFKA-5714
> URL: https://issues.apache.org/jira/browse/KAFKA-5714
> Project: Kafka
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 0.10.2.1
>Reporter: Alla Tumarkin
>Assignee: Manikumar
>
> Request
> Improve parser behavior to allow whitespaces in the principal name in the 
> config file, as in:
> {code}
> super.users=User:CN=Unknown, OU=Unknown, O=Unknown, L=Unknown, ST=Unknown, 
> C=Unknown
> {code}
> Background
> Current implementation requires that there are no whitespaces after commas, 
> i.e.
> {code}
> super.users=User:CN=Unknown,OU=Unknown,O=Unknown,L=Unknown,ST=Unknown,C=Unknown
> {code}
> Note: having a semicolon at the end doesn't help, i.e. this does not work 
> either
> {code}
> super.users=User:CN=Unknown, OU=Unknown, O=Unknown, L=Unknown, ST=Unknown, 
> C=Unknown;
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5734) Heap (Old generation space) gradually increase

2017-08-15 Thread Manikumar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16127078#comment-16127078
 ] 

Manikumar commented on KAFKA-5734:
--

We can use the jmap command to output a histogram of the Java object heap. This 
will help us analyze the heap memory usage.
Take periodic outputs and compare them.

{quote}jdk/bin/jmap -histo:live PID{quote}
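
For example, a simple loop to capture a histogram every five minutes for later 
comparison (the PID, output path, and interval are placeholders):

{code}
while true; do
  jmap -histo:live $PID > /tmp/histo-$(date +%s).txt
  sleep 300
done
{code}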


> Heap (Old generation space) gradually increase
> --
>
> Key: KAFKA-5734
> URL: https://issues.apache.org/jira/browse/KAFKA-5734
> Project: Kafka
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 0.10.2.0
> Environment: ubuntu 14.04 / java 1.7.0
>Reporter: jang
> Attachments: heap-log.xlsx
>
>
> I set up a Kafka server on Ubuntu with 4GB RAM.
> Heap (old generation space) size is increasing gradually, as shown in the 
> attached excel file, which recorded GC info at a 1-minute interval.
> Finally OU occupies 2.6GB and GC spends too much time (and an out-of-memory 
> exception occurs).
> Kafka process arguments are below.
> _java -Xmx3000M -Xms2G -server -XX:+UseG1GC -XX:MaxGCPauseMillis=20 
> -XX:InitiatingHeapOccupancyPercent=35 -XX:+DisableExplicitGC 
> -Djava.awt.headless=true 
> -Xloggc:/usr/local/kafka/bin/../logs/kafkaServer-gc.log -verbose:gc 
> -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps 
> -Dcom.sun.management.jmxremote 
> -Dcom.sun.management.jmxremote.authenticate=false 
> -Dcom.sun.management.jmxremote.ssl=false 
> -Dkafka.logs.dir=/usr/local/kafka/bin/../logs 
> -Dlog4j.configuration=file:/usr/local/kafka/bin/../config/log4j.properties_



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5714) Allow whitespaces in the principal name

2017-08-11 Thread Manikumar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16123074#comment-16123074
 ] 

Manikumar commented on KAFKA-5714:
--

[~a...@confluent.io] From the code, I can see that we are allowing whitespace 
in the principal name. Can you explain more about your observation/scenario?

> Allow whitespaces in the principal name
> ---
>
> Key: KAFKA-5714
> URL: https://issues.apache.org/jira/browse/KAFKA-5714
> Project: Kafka
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 0.10.2.1
>Reporter: Alla Tumarkin
>
> Request
> Improve parser behavior to allow whitespaces in the principal name in the 
> config file, as in:
> {code}
> super.users=User:CN=Unknown, OU=Unknown, O=Unknown, L=Unknown, ST=Unknown, 
> C=Unknown
> {code}
> Background
> Current implementation requires that there are no whitespaces after commas, 
> i.e.
> {code}
> super.users=User:CN=Unknown,OU=Unknown,O=Unknown,L=Unknown,ST=Unknown,C=Unknown
> {code}
> Note: having a semicolon at the end doesn't help, i.e. this does not work 
> either
> {code}
> super.users=User:CN=Unknown, OU=Unknown, O=Unknown, L=Unknown, ST=Unknown, 
> C=Unknown;
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5734) Heap (Old generation space) gradually increase

2017-08-17 Thread Manikumar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16130145#comment-16130145
 ] 

Manikumar commented on KAFKA-5734:
--

Most of the heap entries are related to metrics. Maybe this is related to the 
metrics leak in KAFKA-4629? I hope you are taking this output when the heap is 
almost full.
How many brokers/topics/partitions are there in your setup? Are you running any 
consumers? How much load are you generating?
Can you connect JConsole and check for any increase in the number of metrics?
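
If JConsole is not convenient, a small JMX client can count the Kafka MBeans 
instead; a sketch, assuming the broker is started with a JMX remote port (9999 
here is a placeholder):

{code}
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class KafkaMBeanCount {
    public static void main(String[] args) throws Exception {
        // Placeholder port: requires -Dcom.sun.management.jmxremote.port=9999 on the broker
        JMXServiceURL url =
            new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection conn = connector.getMBeanServerConnection();
            // Count all MBeans in the kafka.* domains; a count that grows
            // steadily across runs suggests a metrics leak
            int count = conn.queryNames(new ObjectName("kafka*:*"), null).size();
            System.out.println("kafka MBean count: " + count);
        }
    }
}
{code}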


> Heap (Old generation space) gradually increase
> --
>
> Key: KAFKA-5734
> URL: https://issues.apache.org/jira/browse/KAFKA-5734
> Project: Kafka
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 0.10.2.0
> Environment: ubuntu 14.04 / java 1.7.0
>Reporter: jang
> Attachments: heap-log.xlsx
>
>
> I set up a Kafka server on Ubuntu with 4GB RAM.
> Heap (old generation space) size is increasing gradually, as shown in the 
> attached excel file, which recorded GC info at a 1-minute interval.
> Finally OU occupies 2.6GB and GC spends too much time (and an out-of-memory 
> exception occurs).
> Kafka process arguments are below.
> _java -Xmx3000M -Xms2G -server -XX:+UseG1GC -XX:MaxGCPauseMillis=20 
> -XX:InitiatingHeapOccupancyPercent=35 -XX:+DisableExplicitGC 
> -Djava.awt.headless=true 
> -Xloggc:/usr/local/kafka/bin/../logs/kafkaServer-gc.log -verbose:gc 
> -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps 
> -Dcom.sun.management.jmxremote 
> -Dcom.sun.management.jmxremote.authenticate=false 
> -Dcom.sun.management.jmxremote.ssl=false 
> -Dkafka.logs.dir=/usr/local/kafka/bin/../logs 
> -Dlog4j.configuration=file:/usr/local/kafka/bin/../config/log4j.properties_



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-275) max.message.size is not enforced for compressed messages

2017-08-17 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-275.
-
Resolution: Fixed

This issue is fixed in the latest versions. Please reopen if the issue still 
exists. 


> max.message.size is not enforced for compressed messages
> 
>
> Key: KAFKA-275
> URL: https://issues.apache.org/jira/browse/KAFKA-275
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.7
>Reporter: Neha Narkhede
>
> The max.message.size check is not performed for compressed messages, but only 
> for each message that forms a compressed message. Due to this, even if the 
> max.message.size is set to 1MB, the producer can technically send n 1MB 
> messages as one compressed message. This can cause memory issues on the 
> server as well as deserialization issues on the consumer. The consumer's 
> fetch size has to be > max.message.size in order to be able to read data. If 
> one message is larger than the fetch.size, the consumer will throw an 
> exception and cannot proceed until the fetch.size is increased. 
> Due to this bug, even if the fetch.size > max.message.size, the consumer can 
> still get stuck on a message that is larger than max.message.size.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-359) Add message constructor that takes payload as a byte buffer

2017-08-17 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-359?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-359.
-
Resolution: Fixed

This has been fixed in newer Kafka versions.

> Add message constructor that takes payload as a byte buffer
> ---
>
> Key: KAFKA-359
> URL: https://issues.apache.org/jira/browse/KAFKA-359
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 0.8.0
>Reporter: Chris Riccomini
>
> Currently, if a ByteBuffer is passed into Message(), it treats the buffer as 
> the message's buffer (including magic byte, meta data, etc) rather than the 
> payload. If you wish to construct a Message and provide just the payload, you 
> have to use a byte array, which results in an extra copy if your payload data 
> is already in a byte buffer.
> For optimization, it would be nice to also provide a constructor like:
> this(payload: ByteBuffer, isPayload: Boolean)
> The existing this(buffer: ByteBuffer) constructor could then just be changed 
> to this(buffer, false).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-357) Refactor zookeeper code in KafkaZookeeper into reusable components

2017-08-17 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-357.
-
Resolution: Duplicate

ZooKeeper-related code is being refactored in KAFKA-5027/KAFKA-5501.

> Refactor zookeeper code in KafkaZookeeper into reusable components 
> ---
>
> Key: KAFKA-357
> URL: https://issues.apache.org/jira/browse/KAFKA-357
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Neha Narkhede
>
> Currently, we stuck a lot of zookeeper code in KafkaZookeeper. This includes 
> leader election, ISR maintenance etc. However, it will be good to wrap up 
> related code in separate components that make logical sense. A good example 
> of this is the ZKQueue data structure. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-196) Topic creation fails on large values

2017-08-17 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-196.
-
Resolution: Fixed

Topic MAX_NAME_LENGTH is set to 249 in newer Kafka versions.

> Topic creation fails on large values
> 
>
> Key: KAFKA-196
> URL: https://issues.apache.org/jira/browse/KAFKA-196
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Reporter: Pierre-Yves Ritschard
> Attachments: 
> 0001-Set-a-hard-limit-on-topic-width-this-fixes-KAFKA-196.patch
>
>
> Since topic logs are stored in a directory holding the topic's name, creation 
> of the directory might fail for large strings.
> This is not a problem per se, but the exception thrown is rather cryptic and 
> hard to figure out for operations.
> I propose fixing this temporarily with a hard limit of 200 chars for topic 
> names; it would also be possible to hash the topic name.
> Another concern is that the exception raised stops the broker, effectively 
> creating a simple DoS vector; I'm concerned about how tests or wrong client 
> library usage can take down the whole broker.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-236) Make 'autooffset.reset' accept a delay in addition to {smallest,largest}

2017-08-17 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-236.
-
Resolution: Fixed

This can be achieved by using the consumer group offset reset tool or the 
KafkaConsumer.offsetsForTimes API in the latest Kafka versions.
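
For example, a minimal sketch using offsetsForTimes to rewind a consumer to one 
hour ago (the broker address, topic, partition, and group are placeholders):

{code}
import java.util.Collections;
import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndTimestamp;
import org.apache.kafka.common.TopicPartition;

public class RewindByTime {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("group.id", "my-group");                // placeholder
        props.put("key.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            TopicPartition tp = new TopicPartition("my-topic", 0);
            consumer.assign(Collections.singletonList(tp));

            // Find the earliest offset whose timestamp is >= one hour ago, then seek to it
            long oneHourAgo = System.currentTimeMillis() - 60 * 60 * 1000L;
            Map<TopicPartition, OffsetAndTimestamp> result =
                consumer.offsetsForTimes(Collections.singletonMap(tp, oneHourAgo));
            OffsetAndTimestamp ot = result.get(tp);
            if (ot != null)
                consumer.seek(tp, ot.offset());
        }
    }
}
{code}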

> Make 'autooffset.reset' accept a delay in addition to {smallest,largest}
> 
>
> Key: KAFKA-236
> URL: https://issues.apache.org/jira/browse/KAFKA-236
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Mathias Herberts
>
> Add the possibility to specify a delay in ms which would be used when 
> resetting offsets.
> This would allow, for example, a client to specify that it would like its 
> offset to be reset to the first offset before/after (current time - the 
> given delay).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-186) no clean way to getCompressionCodec from Java-the-language

2017-08-17 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-186.
-
Resolution: Fixed

The CompressionType Java class was added in newer Kafka versions.
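
For example, a small sketch of resolving a codec from Java with that class 
(codec ids follow the wire format: 0=none, 1=gzip, 2=snappy, 3=lz4):

{code}
import org.apache.kafka.common.record.CompressionType;

public class CodecLookup {
    public static void main(String[] args) {
        // Resolve a codec either by numeric id or by name
        CompressionType byId = CompressionType.forId(1);          // gzip
        CompressionType byName = CompressionType.forName("snappy");
        System.out.println(byId + " / " + byName);
    }
}
{code}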

> no clean way to getCompressionCodec from Java-the-language
> --
>
> Key: KAFKA-186
> URL: https://issues.apache.org/jira/browse/KAFKA-186
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.7
>Reporter: Chris Burroughs
>
> The obvious thing fails:
> CompressionCodec.getCompressionCodec(1) results in cannot find symbol
> symbol  : method getCompressionCodec(int)
> location: interface kafka.message.CompressionCodec
> Writing a switch statement with  kafka.message.NoCompressionCodec$.MODULE$ 
> and duplicating the logic in CompressionCodec.getCompressionCodec is no fun, 
> nor is creating a Hashtable just to call Utils.getCompressionCodec.  I'm not 
> sure if there is a magic keyword to make it easy for javac to understand 
> which CompressionCodec I'm referring to.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-276) Enforce max.message.size on the total message size, not just on payload size

2017-08-17 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-276.
-
Resolution: Fixed

This was fixed in newer Kafka versions.

> Enforce max.message.size on the total message size, not just on payload size
> 
>
> Key: KAFKA-276
> URL: https://issues.apache.org/jira/browse/KAFKA-276
> Project: Kafka
>  Issue Type: Sub-task
>  Components: core
>Affects Versions: 0.7
>Reporter: Neha Narkhede
>
> Today, the max.message.size config is enforced only on the payload size of 
> the message. But the actual message size is header size + payload size.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-2218) reassignment tool needs to parse and validate the json

2017-08-17 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-2218.
--
Resolution: Duplicate

A PR is available for KAFKA-4914.

> reassignment tool needs to parse and validate the json
> --
>
> Key: KAFKA-2218
> URL: https://issues.apache.org/jira/browse/KAFKA-2218
> Project: Kafka
>  Issue Type: Bug
>Reporter: Joe Stein
>Priority: Critical
>
> Ran into a production issue with the broker.id being set to a string instead 
> of an integer; the controller had nothing in the log and stayed stuck. 
> Eventually we saw this in the log of the brokers it was coming from:
> [2015-05-23 15:41:05,863] 67396362 [ZkClient-EventThread-14-ERROR 
> org.I0Itec.zkclient.ZkEventThread - Error handling event ZkEvent[Data of 
> /admin/reassign_partitions changed sent to 
> kafka.controller.PartitionsReassignedListener@78c6aab8]
> java.lang.ClassCastException: java.lang.String cannot be cast to 
> java.lang.Integer
>  at scala.runtime.BoxesRunTime.unboxToInt(Unknown Source)
>  at 
> kafka.controller.KafkaController$$anonfun$4.apply(KafkaController.scala:579)
> we then had to delete the znode from zookeeper (admin/reassign_partition) and 
> then fix the json and try it again



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-3389) ReplicaStateMachine areAllReplicasForTopicDeleted check not handling well case when there are no replicas for topic

2017-08-10 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3389.
--
Resolution: Won't Fix

As mentioned in the previous comment, this may not be an issue. Please reopen 
if it still exists.

> ReplicaStateMachine areAllReplicasForTopicDeleted check not handling well 
> case when there are no replicas for topic
> ---
>
> Key: KAFKA-3389
> URL: https://issues.apache.org/jira/browse/KAFKA-3389
> Project: Kafka
>  Issue Type: Bug
>  Components: controller
>Affects Versions: 0.9.0.1
>Reporter: Stevo Slavic
>Assignee: Manikumar
>Priority: Minor
>
> Line ReplicaStateMachine.scala#L285
> {noformat}
> replicaStatesForTopic.forall(_._2 == ReplicaDeletionSuccessful)
> {noformat}
> which is return value of {{areAllReplicasForTopicDeleted}} function/check, 
> probably should better be checking for
> {noformat}
> replicaStatesForTopic.isEmpty || replicaStatesForTopic.forall(_._2 == 
> ReplicaDeletionSuccessful)
> {noformat}
> I noticed it because in controller logs I found entries like:
> {noformat}
> [2016-03-04 13:27:29,115] DEBUG [Replica state machine on controller 1]: Are 
> all replicas for topic foo deleted Map() 
> (kafka.controller.ReplicaStateMachine)
> {noformat}
> even though normally they look like:
> {noformat}
> [2016-03-04 09:33:41,036] DEBUG [Replica state machine on controller 1]: Are 
> all replicas for topic foo deleted Map([Topic=foo,Partition=0,Replica=0] -> 
> ReplicaDeletionStarted, [Topic=foo,Partition=0,Replica=3] -> 
> ReplicaDeletionStarted, [Topic=foo,Partition=0,Replica=1] -> 
> ReplicaDeletionSuccessful) (kafka.controller.ReplicaStateMachine)
> {noformat}
> This may cause topic deletion request never to be cleared from ZK even when 
> topic has been deleted.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-1696) Kafka should be able to generate Hadoop delegation tokens

2017-07-12 Thread Manikumar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16083972#comment-16083972
 ] 

Manikumar commented on KAFKA-1696:
--

[~guozhang] This got delayed due to some internal work. Will raise the first 
version of the PR by the end of this month.

> Kafka should be able to generate Hadoop delegation tokens
> -
>
> Key: KAFKA-1696
> URL: https://issues.apache.org/jira/browse/KAFKA-1696
> Project: Kafka
>  Issue Type: New Feature
>  Components: security
>Reporter: Jay Kreps
>Assignee: Parth Brahmbhatt
>
> For access from MapReduce/etc jobs run on behalf of a user.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-1499) Broker-side compression configuration

2017-07-12 Thread Manikumar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16083980#comment-16083980
 ] 

Manikumar commented on KAFKA-1499:
--

[~sludwig] Yes, newly produced data will use the latest compression.type. In 
compact mode, it is possible that old data may get compacted using the new 
compression.type.

> Broker-side compression configuration
> -
>
> Key: KAFKA-1499
> URL: https://issues.apache.org/jira/browse/KAFKA-1499
> Project: Kafka
>  Issue Type: New Feature
>Reporter: Joel Koshy
>Assignee: Manikumar
>  Labels: newbie++
> Fix For: 0.9.0.0
>
> Attachments: KAFKA-1499_2014-08-15_14:20:27.patch, 
> KAFKA-1499_2014-08-21_21:44:27.patch, KAFKA-1499_2014-09-21_15:57:23.patch, 
> KAFKA-1499_2014-09-23_14:45:38.patch, KAFKA-1499_2014-09-24_14:20:33.patch, 
> KAFKA-1499_2014-09-24_14:24:54.patch, KAFKA-1499_2014-09-25_11:05:57.patch, 
> KAFKA-1499_2014-10-27_13:13:55.patch, KAFKA-1499_2014-12-16_22:39:10.patch, 
> KAFKA-1499_2014-12-26_21:37:51.patch, KAFKA-1499.patch, KAFKA-1499.patch
>
>   Original Estimate: 72h
>  Remaining Estimate: 72h
>
> A given topic can have messages in mixed compression codecs. i.e., it can
> also have a mix of uncompressed/compressed messages.
> It will be useful to support a broker-side configuration to recompress
> messages to a specific compression codec. i.e., all messages (for all
> topics) on the broker will be compressed to this codec. We could have
> per-topic overrides as well.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-1499) Broker-side compression configuration

2017-07-12 Thread Manikumar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16084244#comment-16084244
 ] 

Manikumar commented on KAFKA-1499:
--

[~sludwig] Yes, you can override compression.type at the topic level. You can 
use the kafka-configs.sh script to set the topic-level compression.type config.
{code}
bin/kafka-configs.sh --zookeeper localhost:2181 --entity-type topics \
  --entity-name my-topic --alter --add-config compression.type=snappy
{code}

> Broker-side compression configuration
> -
>
> Key: KAFKA-1499
> URL: https://issues.apache.org/jira/browse/KAFKA-1499
> Project: Kafka
>  Issue Type: New Feature
>Reporter: Joel Koshy
>Assignee: Manikumar
>  Labels: newbie++
> Fix For: 0.9.0.0
>
> Attachments: KAFKA-1499_2014-08-15_14:20:27.patch, 
> KAFKA-1499_2014-08-21_21:44:27.patch, KAFKA-1499_2014-09-21_15:57:23.patch, 
> KAFKA-1499_2014-09-23_14:45:38.patch, KAFKA-1499_2014-09-24_14:20:33.patch, 
> KAFKA-1499_2014-09-24_14:24:54.patch, KAFKA-1499_2014-09-25_11:05:57.patch, 
> KAFKA-1499_2014-10-27_13:13:55.patch, KAFKA-1499_2014-12-16_22:39:10.patch, 
> KAFKA-1499_2014-12-26_21:37:51.patch, KAFKA-1499.patch, KAFKA-1499.patch
>
>   Original Estimate: 72h
>  Remaining Estimate: 72h
>
> A given topic can have messages in mixed compression codecs. i.e., it can
> also have a mix of uncompressed/compressed messages.
> It will be useful to support a broker-side configuration to recompress
> messages to a specific compression codec. i.e., all messages (for all
> topics) on the broker will be compressed to this codec. We could have
> per-topic overrides as well.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (KAFKA-4914) Partition re-assignment tool should check types before persisting state in ZooKeeper

2017-07-25 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar updated KAFKA-4914:
-
Fix Version/s: 1.0.0

> Partition re-assignment tool should check types before persisting state in 
> ZooKeeper
> 
>
> Key: KAFKA-4914
> URL: https://issues.apache.org/jira/browse/KAFKA-4914
> Project: Kafka
>  Issue Type: Improvement
>  Components: admin
>Affects Versions: 0.10.1.1
>Reporter: Nick Travers
>Priority: Minor
> Fix For: 1.0.0
>
>
> The partition-reassignment too currently allows non-type-safe information to 
> be persisted into ZooKeeper, which can result in a ClassCastException at 
> runtime for brokers.
> Specifically, this occurred when the broker assignment field was a List of 
> Strings, instead of a List of Integers.
> {code}
> 2017-03-15 01:44:04,572 ERROR 
> [ZkClient-EventThread-36-samsa-zkserver.stage.sjc1.square:26101/samsa] 
> controller.ReplicaStateMachine$BrokerChangeListener - [BrokerChangeListener 
> on Controller 10]: Error while handling broker changes
> java.lang.ClassCastException: java.lang.String cannot be cast to 
> java.lang.Integer
> at scala.runtime.BoxesRunTime.unboxToInt(BoxesRunTime.java:101)
> at 
> kafka.controller.KafkaController$$anonfun$8$$anonfun$apply$2.apply(KafkaController.scala:436)
> at 
> scala.collection.LinearSeqOptimized$class.exists(LinearSeqOptimized.scala:93)
> at scala.collection.immutable.List.exists(List.scala:84)
> at 
> kafka.controller.KafkaController$$anonfun$8.apply(KafkaController.scala:436)
> at 
> kafka.controller.KafkaController$$anonfun$8.apply(KafkaController.scala:435)
> at 
> scala.collection.TraversableLike$$anonfun$filterImpl$1.apply(TraversableLike.scala:248)
> at 
> scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99)
> at 
> scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99)
> at 
> scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
> at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
> at scala.collection.mutable.HashMap.foreach(HashMap.scala:99)
> at 
> scala.collection.TraversableLike$class.filterImpl(TraversableLike.scala:247)
> at 
> scala.collection.TraversableLike$class.filter(TraversableLike.scala:259)
> at scala.collection.AbstractTraversable.filter(Traversable.scala:104)
> at 
> kafka.controller.KafkaController.onBrokerStartup(KafkaController.scala:435)
> at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ReplicaStateMachine.scala:374)
> at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1$$anonfun$apply$mcV$sp$1.apply(ReplicaStateMachine.scala:358)
> at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1$$anonfun$apply$mcV$sp$1.apply(ReplicaStateMachine.scala:358)
> at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
> at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1.apply$mcV$sp(ReplicaStateMachine.scala:357)
> at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1.apply(ReplicaStateMachine.scala:356)
> at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1.apply(ReplicaStateMachine.scala:356)
> at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:234)
> at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener.handleChildChange(ReplicaStateMachine.scala:355)
> at org.I0Itec.zkclient.ZkClient$10.run(ZkClient.java:843)
> at org.I0Itec.zkclient.ZkEventThread.run(ZkEventThread.java:71)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5644) Transient test failure: ResetConsumerGroupOffsetTest.testResetOffsetsToZonedDateTime

2017-07-26 Thread Manikumar (JIRA)
Manikumar created KAFKA-5644:


 Summary: Transient test failure: 
ResetConsumerGroupOffsetTest.testResetOffsetsToZonedDateTime
 Key: KAFKA-5644
 URL: https://issues.apache.org/jira/browse/KAFKA-5644
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.11.0.0
Reporter: Manikumar
Priority: Minor


{quote}
unit.kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToZonedDateTime 
FAILED
java.lang.AssertionError: Expected the consumer group to reset to when 
offset was 50.
at kafka.utils.TestUtils$.fail(TestUtils.scala:339)
at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:853)
at 
unit.kafka.admin.ResetConsumerGroupOffsetTest.testResetOffsetsToZonedDateTime(ResetConsumerGroupOffsetTest.scala:188)
{quote}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (KAFKA-5644) Transient test failure: ResetConsumerGroupOffsetTest.testResetOffsetsToZonedDateTime

2017-08-04 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar reassigned KAFKA-5644:


Assignee: Manikumar

> Transient test failure: 
> ResetConsumerGroupOffsetTest.testResetOffsetsToZonedDateTime
> 
>
> Key: KAFKA-5644
> URL: https://issues.apache.org/jira/browse/KAFKA-5644
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.11.0.0
>Reporter: Manikumar
>Assignee: Manikumar
>Priority: Minor
>
> {quote}
> unit.kafka.admin.ResetConsumerGroupOffsetTest > 
> testResetOffsetsToZonedDateTime FAILED
> java.lang.AssertionError: Expected the consumer group to reset to when 
> offset was 50.
> at kafka.utils.TestUtils$.fail(TestUtils.scala:339)
> at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:853)
> at 
> unit.kafka.admin.ResetConsumerGroupOffsetTest.testResetOffsetsToZonedDateTime(ResetConsumerGroupOffsetTest.scala:188)
> {quote}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5751) Kafka cannot start; corrupted index file(s)

2017-08-18 Thread Manikumar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16133441#comment-16133441
 ] 

Manikumar commented on KAFKA-5751:
--

Duplicate of KAFKA-5747

> Kafka cannot start; corrupted index file(s)
> ---
>
> Key: KAFKA-5751
> URL: https://issues.apache.org/jira/browse/KAFKA-5751
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 0.11.0.0
> Environment: Linux (RedHat 7)
>Reporter: Martin M
>Priority: Critical
>
> A system was running Kafka 0.11.0 and some applications that produce and 
> consume events.
> During the runtime, a power outage was experienced. Upon restart, Kafka did 
> not recover.
> Logs show repeatedly the messages below:
> *server.log*
> {noformat}
> [2017-08-15 15:02:26,374] FATAL [Kafka Server 1001], Fatal error during 
> KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
> org.apache.kafka.common.protocol.types.SchemaException: Error reading field 
> 'version': java.nio.BufferUnderflowException
>   at org.apache.kafka.common.protocol.types.Schema.read(Schema.java:75)
>   at 
> kafka.log.ProducerStateManager$.readSnapshot(ProducerStateManager.scala:289)
>   at 
> kafka.log.ProducerStateManager.loadFromSnapshot(ProducerStateManager.scala:440)
>   at 
> kafka.log.ProducerStateManager.truncateAndReload(ProducerStateManager.scala:499)
>   at kafka.log.Log.kafka$log$Log$$recoverSegment(Log.scala:327)
>   at kafka.log.Log$$anonfun$loadSegmentFiles$3.apply(Log.scala:314)
>   at kafka.log.Log$$anonfun$loadSegmentFiles$3.apply(Log.scala:272)
>   at 
> scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
>   at 
> scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
>   at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
>   at 
> scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
>   at kafka.log.Log.loadSegmentFiles(Log.scala:272)
>   at kafka.log.Log.loadSegments(Log.scala:376)
>   at kafka.log.Log.(Log.scala:179)
>   at kafka.log.Log$.apply(Log.scala:1580)
>   at 
> kafka.log.LogManager$$anonfun$loadLogs$2$$anonfun$5$$anonfun$apply$12$$anonfun$apply$1.apply$mcV$sp(LogManager.scala:172)
>   at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:57)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:748)
> {noformat}
> *kafkaServer.out*
> {noformat}
> [2017-08-15 16:03:50,927] WARN Found a corrupted index file due to 
> requirement failed: Corrupt index found, index file 
> (/opt/nsp/os/kafka/data/session-manager.revoke_token_topic-7/.index)
>  has non-zero size but the last offset is 0 which is no larger than the base 
> offset 0.}. deleting 
> /opt/nsp/os/kafka/data/session-manager.revoke_token_topic-7/.timeindex,
>  
> /opt/nsp/os/kafka/data/session-manager.revoke_token_topic-7/.index,
>  and 
> /opt/nsp/os/kafka/data/session-manager.revoke_token_topic-7/.txnindex
>  and rebuilding index... (kafka.log.Log)
> [2017-08-15 16:03:50,931] INFO [Kafka Server 1001], shutting down 
> (kafka.server.KafkaServer)
> [2017-08-15 16:03:50,932] INFO Recovering unflushed segment 0 in log 
> session-manager.revoke_token_topic-7. (kafka.log.Log)
> [2017-08-15 16:03:50,935] INFO Terminate ZkClient event thread. 
> (org.I0Itec.zkclient.ZkEventThread)
> [2017-08-15 16:03:50,936] INFO Loading producer state from offset 0 for 
> partition session-manager.revoke_token_topic-7 with message format version 2 
> (kafka.log.Log)
> [2017-08-15 16:03:50,937] INFO Completed load of log 
> session-manager.revoke_token_topic-7 with 1 log segments, log start offset 0 
> and log end offset 0 in 10 ms (kafka.log.Log)
> [2017-08-15 16:03:50,938] INFO Session: 0x1000f772d26063b closed 
> (org.apache.zookeeper.ZooKeeper)
> [2017-08-15 16:03:50,938] INFO EventThread shut down for session: 
> 0x1000f772d26063b (org.apache.zookeeper.ClientCnxn)
> {noformat}
>  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-4889) 2G8lc

2017-08-16 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-4889.
--
Resolution: Invalid

> 2G8lc
> -
>
> Key: KAFKA-4889
> URL: https://issues.apache.org/jira/browse/KAFKA-4889
> Project: Kafka
>  Issue Type: Task
>Reporter: Vamsi Jakkula
>
> Creating of an issue using project keys and issue type names using the REST 
> API



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-4847) 1Y30J

2017-08-16 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-4847.
--
Resolution: Invalid

> 1Y30J
> -
>
> Key: KAFKA-4847
> URL: https://issues.apache.org/jira/browse/KAFKA-4847
> Project: Kafka
>  Issue Type: Bug
>Reporter: Vamsi Jakkula
>
> Creating of an issue using project keys and issue type names using the REST 
> API



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-4865) 2X8BF

2017-08-16 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4865?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-4865.
--
Resolution: Invalid

> 2X8BF
> -
>
> Key: KAFKA-4865
> URL: https://issues.apache.org/jira/browse/KAFKA-4865
> Project: Kafka
>  Issue Type: Bug
>Reporter: Vamsi Jakkula
>
> Creating of an issue using project keys and issue type names using the REST 
> API



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-4951) KafkaProducer may send duplicated message sometimes

2017-08-16 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-4951.
--
Resolution: Fixed

This scenario is handled by the idempotent producer (KIP-98) released in Kafka 
0.11.0.0; a sketch of enabling it is below. Please reopen if you think the 
issue still exists.
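
For reference, a minimal sketch of enabling idempotence on the producer side 
(the broker address and topic are placeholders):

{code}
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class IdempotentSend {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        // KIP-98: the broker de-duplicates the producer's internal retries
        props.put("enable.idempotence", "true");
        props.put("key.serializer",
            "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
            "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("my-topic", "key", "abc"));
        }
    }
}
{code}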

> KafkaProducer may send duplicated message sometimes
> ---
>
> Key: KAFKA-4951
> URL: https://issues.apache.org/jira/browse/KAFKA-4951
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.1
>Reporter: cuiyang
>
> I found that KafkaProducer may send duplicated messages sometimes, which 
> happens when:
>  In the Sender thread:
>  NetworkClient::poll()
>  -> this.selector.poll()   // sends the message, such as "abc", to 
> the broker successfully
>  -> handleTimedOutRequests(responses, updatedNow);   // judges whether 
> the message "abc" sent above is expired or timed out; the judgment is based 
> on the parameter this.requestTimeoutMs and updatedNow;
>  -> response.request().callback().onComplete()
>  -> 
> completeBatch(batch, Errors.NETWORK_EXCEPTION, -1L, correlationId, now);   // if 
> the message was judged as expired, it will be re-enqueued and sent again in 
> the next loop;
>  -> this.accumulator.reenqueue(batch, now);
> The problem: even if the message "abc" is sent successfully to the broker, it 
> may still be judged as expired, so the message will be sent again in the next 
> loop, which makes the message duplicated.
> I can reproduce this scenario reliably.
> In my opinion, Send::handleTimedOutRequests() is not very useful, because the 
> response to the send request from the broker is successful and has no error, 
> which means the broker persisted it successfully. And this function can 
> induce the duplicated-message problem.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-4813) 2h6R1

2017-08-16 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-4813.
--
Resolution: Invalid

> 2h6R1
> -
>
> Key: KAFKA-4813
> URL: https://issues.apache.org/jira/browse/KAFKA-4813
> Project: Kafka
>  Issue Type: Bug
>Reporter: Vamsi Jakkula
>
> Creating of an issue using project keys and issue type names using the REST 
> API



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-4803) OT6Y1

2017-08-16 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-4803.
--
Resolution: Invalid

> OT6Y1
> -
>
> Key: KAFKA-4803
> URL: https://issues.apache.org/jira/browse/KAFKA-4803
> Project: Kafka
>  Issue Type: Bug
>Reporter: Vamsi Jakkula
>
> Creating of an issue using project keys and issue type names using the REST 
> API



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-4804) TdOZY

2017-08-16 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-4804.
--
Resolution: Invalid

> TdOZY
> -
>
> Key: KAFKA-4804
> URL: https://issues.apache.org/jira/browse/KAFKA-4804
> Project: Kafka
>  Issue Type: Bug
>Reporter: Vamsi Jakkula
>
> Creating of an issue using project keys and issue type names using the REST 
> API



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-4821) 9244L

2017-08-16 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-4821.
--
Resolution: Invalid

> 9244L
> -
>
> Key: KAFKA-4821
> URL: https://issues.apache.org/jira/browse/KAFKA-4821
> Project: Kafka
>  Issue Type: Task
>Reporter: Vamsi Jakkula
>
> Creating of an issue using project keys and issue type names using the REST 
> API



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-3796) SslTransportLayerTest.testInvalidEndpointIdentification fails on trunk

2017-08-16 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3796?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3796.
--
Resolution: Cannot Reproduce

 Please reopen if you think the issue still exists.

> SslTransportLayerTest.testInvalidEndpointIdentification fails on trunk
> --
>
> Key: KAFKA-3796
> URL: https://issues.apache.org/jira/browse/KAFKA-3796
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients, security
>Affects Versions: 0.10.0.0
>Reporter: Rekha Joshi
>Assignee: Rekha Joshi
>Priority: Trivial
>
> org.apache.kafka.common.network.SslTransportLayerTest > 
> testEndpointIdentificationDisabled FAILED
> java.net.BindException: Can't assign requested address
> at sun.nio.ch.Net.bind0(Native Method)
> at sun.nio.ch.Net.bind(Net.java:433)
> at sun.nio.ch.Net.bind(Net.java:425)
> at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
> at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
> at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:67)
> at 
> org.apache.kafka.common.network.NioEchoServer.(NioEchoServer.java:48)
> at 
> org.apache.kafka.common.network.SslTransportLayerTest.testEndpointIdentificationDisabled(SslTransportLayerTest.java:120)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-5737) KafkaAdminClient thread should be daemon

2017-08-19 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5737?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-5737.
--
   Resolution: Fixed
Fix Version/s: 1.0.0
   0.11.0.1

> KafkaAdminClient thread should be daemon
> 
>
> Key: KAFKA-5737
> URL: https://issues.apache.org/jira/browse/KAFKA-5737
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.11.0.0
>Reporter: Colin P. McCabe
>Assignee: Colin P. McCabe
> Fix For: 0.11.0.1, 1.0.0
>
>
> The admin client thread should be daemon, for consistency with the consumer 
> and producer threads.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-3322) recurring errors

2017-08-19 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3322.
--
Resolution: Won't Fix

 Please reopen if you think the issue still exists.


> recurring errors
> 
>
> Key: KAFKA-3322
> URL: https://issues.apache.org/jira/browse/KAFKA-3322
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.9.0.0
> Environment: kafka0.9.0 and zookeeper 3.4.6
>Reporter: jackie
>
> We're getting hundreds of these errors with Kafka 0.8, and topics become 
> unavailable after running for a few days. It looks like this: 
> https://issues.apache.org/jira/browse/KAFKA-1314



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (KAFKA-5744) ShellTest: add tests for attempting to run nonexistent program, error return

2017-08-19 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar reassigned KAFKA-5744:


Assignee: Manikumar  (was: Colin P. McCabe)

> ShellTest: add tests for attempting to run nonexistent program, error return
> 
>
> Key: KAFKA-5744
> URL: https://issues.apache.org/jira/browse/KAFKA-5744
> Project: Kafka
>  Issue Type: Improvement
>  Components: unit tests
>Reporter: Colin P. McCabe
>Assignee: Manikumar
> Fix For: 1.0.0
>
>
> ShellTest should have tests for attempting to run nonexistent program, and 
> running a program which returns an error.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (KAFKA-5744) ShellTest: add tests for attempting to run nonexistent program, error return

2017-08-19 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar reassigned KAFKA-5744:


Assignee: Colin P. McCabe  (was: Manikumar)

> ShellTest: add tests for attempting to run nonexistent program, error return
> 
>
> Key: KAFKA-5744
> URL: https://issues.apache.org/jira/browse/KAFKA-5744
> Project: Kafka
>  Issue Type: Improvement
>  Components: unit tests
>Reporter: Colin P. McCabe
>Assignee: Colin P. McCabe
> Fix For: 1.0.0
>
>
> ShellTest should have tests for attempting to run nonexistent program, and 
> running a program which returns an error.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5744) ShellTest: add tests for attempting to run nonexistent program, error return

2017-08-19 Thread Manikumar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5744?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16134008#comment-16134008
 ] 

Manikumar commented on KAFKA-5744:
--

ShellTest.testRunProgramWithErrorReturn is failing on my machine.
cc [~cmccabe] 

java.lang.AssertionError
at org.junit.Assert.fail(Assert.java:86)
at org.junit.Assert.assertTrue(Assert.java:41)
at org.junit.Assert.assertTrue(Assert.java:52)
at 
org.apache.kafka.common.utils.ShellTest.testRunProgramWithErrorReturn(ShellTest.java:70)
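
A standalone reproduction sketch for reference; it assumes the Shell utility from the clients module, and the exact command and assertion used by ShellTest may differ:

{code}
import java.io.IOException;
import org.apache.kafka.common.utils.Shell;

public class ShellErrorRepro {
    public static void main(String[] args) {
        try {
            // head on a nonexistent path makes the program exit non-zero
            Shell.execCommand("head", "-c", "12345", "/nonexistent-file");
            System.out.println("unexpected: command succeeded");
        } catch (IOException e) {
            // The message text is platform-dependent, which is a plausible
            // cause of the assertion failing on some machines.
            System.out.println("failed as expected: " + e.getMessage());
        }
    }
}
{code}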

> ShellTest: add tests for attempting to run nonexistent program, error return
> 
>
> Key: KAFKA-5744
> URL: https://issues.apache.org/jira/browse/KAFKA-5744
> Project: Kafka
>  Issue Type: Improvement
>  Components: unit tests
>Reporter: Colin P. McCabe
>Assignee: Manikumar
> Fix For: 1.0.0
>
>
> ShellTest should have tests for attempting to run nonexistent program, and 
> running a program which returns an error.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-4078) VIP for Kafka doesn't work

2017-08-19 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-4078.
--
Resolution: Cannot Reproduce

 Please reopen if you think the issue still exists.


> VIP for Kafka  doesn't work 
> 
>
> Key: KAFKA-4078
> URL: https://issues.apache.org/jira/browse/KAFKA-4078
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.9.0.1
>Reporter: chao
>
> We created a VIP in front of chao007kfk002.chao007.com:9092, 
> chao007kfk003.chao007.com:9092 and chao007kfk001.chao007.com:9092.
> But we found an issue in the Kafka client API: the client's metadata update 
> returns the three brokers, so it creates three direct connections to 001, 
> 002 and 003.
> When we changed the VIP to chao008kfk002.chao008.com:9092, 
> chao008kfk003.chao008.com:9092 and chao008kfk001.chao008.com:9092, it still 
> produced data to the 007 brokers.
> The following is the log information:
> sasl.kerberos.ticket.renew.window.factor = 0.8
> bootstrap.servers = [kfk.chao.com:9092]
> client.id = 
> 2016-08-23 07:00:48,451:DEBUG kafka-producer-network-thread | producer-1 
> (NetworkClient.java:623) - Initialize connection to node -1 for sending 
> metadata request
> 2016-08-23 07:00:48,452:DEBUG kafka-producer-network-thread | producer-1 
> (NetworkClient.java:487) - Initiating connection to node -1 at 
> kfk.chao.com:9092.
> 2016-08-23 07:00:48,463:DEBUG kafka-producer-network-thread | producer-1 
> (Metrics.java:201) - Added sensor with name node--1.bytes-sent
>   
>   
> 2016-08-23 07:00:48,489:DEBUG kafka-producer-network-thread | producer-1 
> (NetworkClient.java:619) - Sending metadata request 
> ClientRequest(expectResponse=true, callback=null, 
> request=RequestSend(header={api_key=3,api_version=0,correlation_id=0,client_id=producer-1},
>  body={topics=[chao_vip]}), isInitiatedByNetworkClient, 
> createdTimeMs=1471935648465, sendTimeMs=0) to node -1
> 2016-08-23 07:00:48,512:DEBUG kafka-producer-network-thread | producer-1 
> (Metadata.java:172) - Updated cluster metadata version 2 to Cluster(nodes = 
> [Node(1, chao007kfk002.chao007.com, 9092), Node(2, chao007kfk003.chao007.com, 
> 9092), Node(0, chao007kfk001.chao007.com, 9092)], partitions = 
> [Partition(topic = chao_vip, partition = 0, leader = 0, replicas = [0,], isr 
> = [0,], Partition(topic = chao_vip, partition = 3, leader = 0, replicas = 
> [0,], isr = [0,], Partition(topic = chao_vip, partition = 2, leader = 2, 
> replicas = [2,], isr = [2,], Partition(topic = chao_vip, partition = 1, 
> leader = 1, replicas = [1,], isr = [1,], Partition(topic = chao_vip, 
> partition = 4, leader = 1, replicas = [1,], isr = [1,]])
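
For illustration, a hedged producer sketch of why repointing the VIP has no effect: bootstrap.servers is used only for the initial metadata fetch, after which the client connects to the advertised broker host:ports returned in the metadata (the chao007 hosts above).

{code}
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class VipDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kfk.chao.com:9092"); // the VIP
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // After the first metadata response, connections target the
            // advertised listeners, not the VIP, until the client restarts.
            producer.send(new ProducerRecord<>("chao_vip", "key", "value"));
        }
    }
}
{code}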



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-2053) Make initZk a protected function

2017-08-19 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-2053.
--
Resolution: Won't Fix

 Please reopen if you think the requirement still exists.

> Make initZk a protected function
> 
>
> Key: KAFKA-2053
> URL: https://issues.apache.org/jira/browse/KAFKA-2053
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 0.8.2.0
>Reporter: Christian Kampka
>Priority: Minor
> Attachments: make-initzk-protected
>
>
> In our environment, we have established an external procedure to notify 
> clients of changes in the zookeeper cluster configuration, especially 
> appearance and disappearance of nodes. it has also become quite common to run 
> Kafka as an embedded service (especially in tests).
> When doing so, it would makes things easier if it were possible to manipulate 
> the creation of the zookeeper client to supply Kafka with a specialized 
> ZooKeeper client that is adjusted to our needs but of course API compatible 
> with the ZkClient.
> Therefore, I would like to propose to make the initZk method protected so we 
> will be able to simply override it for client creation. 
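
A generic Java sketch of the requested hook, for reference; all class names here are hypothetical (KafkaServer itself is Scala, and initZk was not made overridable):

{code}
class ZkLikeClient {
    ZkLikeClient(String connect) { /* a real client would connect here */ }
}

class InstrumentedZkClient extends ZkLikeClient {
    InstrumentedZkClient(String connect) { super(connect); }
}

class EmbeddedServer {
    protected ZkLikeClient initZk() {           // the proposed protected hook
        return new ZkLikeClient("localhost:2181");
    }
    public void startup() {
        ZkLikeClient zk = initZk();             // subclass picks the concrete client
        // ... remaining startup logic uses zk ...
    }
}

class TestServer extends EmbeddedServer {
    @Override
    protected ZkLikeClient initZk() {
        return new InstrumentedZkClient("localhost:2181"); // API-compatible stand-in
    }
}
{code}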



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-1495) Kafka Example SimpleConsumerDemo

2017-08-19 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-1495.
--
Resolution: Won't Fix

> Kafka Example SimpleConsumerDemo 
> -
>
> Key: KAFKA-1495
> URL: https://issues.apache.org/jira/browse/KAFKA-1495
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.8.1.1
> Environment: Mac OS
>Reporter: darion yaphet
>Assignee: Jun Rao
>
> I ran the official SimpleConsumerDemo under 
> kafka-0.8.1.1-src/examples/src/main/java/kafka/examples on my machine. I 
> found two directories under /tmp/kafka-logs, topic2-0 and topic2-1, and one 
> of them is empty:
> ➜  kafka-logs  ls -lF  topic2-0  topic2-1
> topic2-0:
> total 21752
> -rw-r--r--  1 2011204  wheel  10485760  6 17 17:34 .index
> -rw-r--r--  1 2011204  wheel651109  6 17 18:44 .log
> topic2-1:
> total 20480
> -rw-r--r--  1 2011204  wheel  10485760  6 17 17:34 .index
> -rw-r--r--  1 2011204  wheel 0  6 17 17:34 .log 
> Is this a bug, or something that should be configured in the source code?
> Thank you.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-2231) Deleting a topic fails

2017-08-19 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-2231.
--
Resolution: Cannot Reproduce

Topic deletion is more stable in the latest releases. Please reopen if you 
think the issue still exists.

> Deleting a topic fails
> --
>
> Key: KAFKA-2231
> URL: https://issues.apache.org/jira/browse/KAFKA-2231
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.1
> Environment: Windows 8.1
>Reporter: James G. Haberly
>Priority: Minor
>
> delete.topic.enable=true is in config\server.properties.
> Using --list shows the topic "marked for deletion".
> Stopping and restarting kafka and zookeeper does not delete the topic; it 
> remains "marked for deletion".
> Trying to recreate the topic fails with "Topic XXX already exists".



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-3953) start kafka fail

2017-08-19 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3953.
--
Resolution: Cannot Reproduce

 Please reopen if you think the issue still exists.


> start kafka fail
> 
>
> Key: KAFKA-3953
> URL: https://issues.apache.org/jira/browse/KAFKA-3953
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.2
> Environment: Linux host-172-28-0-3 3.10.0-327.18.2.el7.x86_64 #1 SMP 
> Thu May 12 11:03:55 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
>Reporter: ffh
>
> Kafka fails to start. Error message:
> [2016-07-12 03:57:32,717] FATAL [Kafka Server 0], Fatal error during 
> KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
> java.util.NoSuchElementException: None.get
>   at scala.None$.get(Option.scala:313)
>   at scala.None$.get(Option.scala:311)
>   at kafka.controller.KafkaController.clientId(KafkaController.scala:215)
>   at 
> kafka.controller.ControllerBrokerRequestBatch.<init>(ControllerChannelManager.scala:189)
>   at 
> kafka.controller.PartitionStateMachine.<init>(PartitionStateMachine.scala:48)
>   at kafka.controller.KafkaController.<init>(KafkaController.scala:156)
>   at kafka.server.KafkaServer.startup(KafkaServer.scala:148)
>   at 
> kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:29)
>   at kafka.Kafka$.main(Kafka.scala:72)
>   at kafka.Kafka.main(Kafka.scala)
> [2016-07-12 03:57:33,124] FATAL Fatal error during KafkaServerStartable 
> startup. Prepare to shutdown (kafka.server.KafkaServerStartable)
> java.util.NoSuchElementException: None.get
>   at scala.None$.get(Option.scala:313)
>   at scala.None$.get(Option.scala:311)
>   at kafka.controller.KafkaController.clientId(KafkaController.scala:215)
>   at 
> kafka.controller.ControllerBrokerRequestBatch.<init>(ControllerChannelManager.scala:189)
>   at 
> kafka.controller.PartitionStateMachine.<init>(PartitionStateMachine.scala:48)
>   at kafka.controller.KafkaController.<init>(KafkaController.scala:156)
>   at kafka.server.KafkaServer.startup(KafkaServer.scala:148)
>   at 
> kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:29)
>   at kafka.Kafka$.main(Kafka.scala:72)
>   at kafka.Kafka.main(Kafka.scala)
> config:
> # Generated by Apache Ambari. Tue Jul 12 03:18:02 2016
> authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
> auto.create.topics.enable=true
> auto.leader.rebalance.enable=true
> broker.id=0
> compression.type=producer
> controlled.shutdown.enable=true
> controlled.shutdown.max.retries=3
> controlled.shutdown.retry.backoff.ms=5000
> controller.message.queue.size=10
> controller.socket.timeout.ms=3
> default.replication.factor=1
> delete.topic.enable=false
> external.kafka.metrics.exclude.prefix=kafka.network.RequestMetrics,kafka.server.DelayedOperationPurgatory,kafka.server.BrokerTopicMetrics.BytesRejectedPerSec
> external.kafka.metrics.include.prefix=kafka.network.RequestMetrics.ResponseQueueTimeMs.request.OffsetCommit.98percentile,kafka.network.RequestMetrics.ResponseQueueTimeMs.request.Offsets.95percentile,kafka.network.RequestMetrics.ResponseSendTimeMs.request.Fetch.95percentile,kafka.network.RequestMetrics.RequestsPerSec.request
> fetch.purgatory.purge.interval.requests=1
> kafka.ganglia.metrics.group=kafka
> kafka.ganglia.metrics.host=localhost
> kafka.ganglia.metrics.port=8671
> kafka.ganglia.metrics.reporter.enabled=true
> kafka.metrics.reporters=
> kafka.timeline.metrics.host=
> kafka.timeline.metrics.maxRowCacheSize=1
> kafka.timeline.metrics.port=
> kafka.timeline.metrics.reporter.enabled=true
> kafka.timeline.metrics.reporter.sendInterval=5900
> leader.imbalance.check.interval.seconds=300
> leader.imbalance.per.broker.percentage=10
> listeners=PLAINTEXT://host-172-28-0-3:6667
> log.cleanup.interval.mins=10
> log.dirs=/kafka-logs
> log.index.interval.bytes=4096
> log.index.size.max.bytes=10485760
> log.retention.bytes=-1
> log.retention.hours=168
> log.roll.hours=168
> log.segment.bytes=1073741824
> message.max.bytes=100
> min.insync.replicas=1
> num.io.threads=8
> num.network.threads=3
> num.partitions=1
> num.recovery.threads.per.data.dir=1
> num.replica.fetchers=1
> offset.metadata.max.bytes=4096
> offsets.commit.required.acks=-1
> offsets.commit.timeout.ms=5000
> offsets.load.buffer.size=5242880
> offsets.retention.check.interval.ms=60
> offsets.retention.minutes=8640
> offsets.topic.compression.codec=0
> offsets.topic.num.partitions=50
> offsets.topic.replication.factor=3
> offsets.topic.segment.bytes=104857600
> principal.to.local.class=kafka.security.auth.KerberosPrincipalToLocal
> producer.purgatory.purge.interval.requests=1
> queued.max.requests=500
> replica.fetch.max.bytes=1048576
> replica.fetch.min.bytes=1
> replica.fetch.wait.max.ms=500
> 

[jira] [Resolved] (KAFKA-2093) Remove logging error if we throw exception

2017-08-19 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-2093.
--
Resolution: Won't Fix

The Scala producer is deprecated. Please reopen if you think the issue still 
exists.


> Remove logging error if we throw exception
> --
>
> Key: KAFKA-2093
> URL: https://issues.apache.org/jira/browse/KAFKA-2093
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ivan Balashov
>Priority: Trivial
>
> On failure, kafka producer logs error AND throws exception. This can pose 
> problems, since client application cannot flexibly control if a particular 
> exception should be logged, and logging becomes all-or-nothing choice for 
> particular logger.
> We must remove logging error if we decide to throw exception.
> Some examples of this:
> kafka.client.ClientUtils$:89
> kafka.producer.SyncProducer:103
> If no one has objections, I can search around for other cases of logging + 
> throwing which should also be fixed.
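
For illustration, a self-contained sketch of the log-and-throw anti-pattern described here versus the proposed fix; the names are made up, not the actual ClientUtils/SyncProducer code:

{code}
import java.io.IOException;
import java.util.logging.Logger;

public class LogAndThrowDemo {
    private static final Logger LOG = Logger.getLogger("demo");

    static void doSend(String m) throws IOException {
        throw new IOException("broken pipe");
    }

    // Anti-pattern: every caller that already handles the exception gets a
    // duplicate "send failed" entry it cannot suppress per call site.
    static void sendLogAndThrow(String m) {
        try {
            doSend(m);
        } catch (IOException e) {
            LOG.severe("send failed: " + e);
            throw new RuntimeException(e);
        }
    }

    // Preferred: throw once and let the caller decide whether to log.
    static void sendJustThrow(String m) {
        try {
            doSend(m);
        } catch (IOException e) {
            throw new RuntimeException("send failed", e);
        }
    }
}
{code}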



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-3327) Warning from kafka mirror maker about ssl properties not valid

2017-08-19 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3327.
--
Resolution: Cannot Reproduce

This is most likely a configuration issue. Please reopen if you think the 
issue still exists.
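
For reference, a minimal sketch of SSL settings as the new (Java) consumer expects them; paths and passwords are placeholders. Two details worth noting: the valid key is ssl.key.password (the reporter's ssl.keypassword is not a recognized name), and the old Scala consumer behind kafka.utils.VerifiableProperties has no SSL support at all, which is why it flags every ssl.* key as "not valid".

{code}
import java.util.Properties;

public class SslConsumerProps {
    public static Properties build() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9093"); // the SSL listener
        props.put("security.protocol", "SSL");
        props.put("ssl.truststore.location", "/etc/kafka/client.truststore.jks");
        props.put("ssl.truststore.password", "changeit");
        props.put("ssl.keystore.location", "/etc/kafka/client.keystore.jks");
        props.put("ssl.keystore.password", "changeit");
        props.put("ssl.key.password", "changeit"); // note: not "ssl.keypassword"
        return props;
    }
}
{code}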


> Warning from kafka mirror maker about ssl properties not valid
> --
>
> Key: KAFKA-3327
> URL: https://issues.apache.org/jira/browse/KAFKA-3327
> Project: Kafka
>  Issue Type: Test
>  Components: config
>Affects Versions: 0.9.0.1
> Environment: CentOS release 6.5
>Reporter: Munir Khan
>Priority: Minor
>  Labels: kafka, mirror-maker, ssl
>
> I am trying to run Mirror maker  over SSL. I have configured my broker 
> following the procedure described in this document 
> http://kafka.apache.org/documentation.html#security_overview 
> I get the following warning when I start the mirror maker:
> [root@munkhan-kafka1 kafka_2.10-0.9.0.1]# bin/kafka-run-class.sh 
> kafka.tools.MirrorMaker --consumer.config 
> config/datapush-consumer-ssl.properties --producer.config 
> config/datapush-producer-ssl.properties --num.streams 2 --whitelist test1&
> [1] 4701
> [root@munkhan-kafka1 kafka_2.10-0.9.0.1]# [2016-03-03 10:24:35,348] WARN 
> block.on.buffer.full config is deprecated and will be removed soon. Please 
> use max.block.ms (org.apache.kafka.clients.producer.KafkaProducer)
> [2016-03-03 10:24:35,523] WARN The configuration producer.type = sync was 
> supplied but isn't a known config. 
> (org.apache.kafka.clients.producer.ProducerConfig)
> [2016-03-03 10:24:35,523] WARN The configuration ssl.keypassword = test1234 
> was supplied but isn't a known config. 
> (org.apache.kafka.clients.producer.ProducerConfig)
> [2016-03-03 10:24:35,523] WARN The configuration compression.codec = none was 
> supplied but isn't a known config. 
> (org.apache.kafka.clients.producer.ProducerConfig)
> [2016-03-03 10:24:35,523] WARN The configuration serializer.class = 
> kafka.serializer.DefaultEncoder was supplied but isn't a known config. 
> (org.apache.kafka.clients.producer.ProducerConfig)
> [2016-03-03 10:24:35,617] WARN Property security.protocol is not valid 
> (kafka.utils.VerifiableProperties)
> [2016-03-03 10:24:35,617] WARN Property ssl.keypassword is not valid 
> (kafka.utils.VerifiableProperties)
> [2016-03-03 10:24:35,617] WARN Property ssl.keystore.location is not valid 
> (kafka.utils.VerifiableProperties)
> [2016-03-03 10:24:35,618] WARN Property ssl.keystore.password is not valid 
> (kafka.utils.VerifiableProperties)
> [2016-03-03 10:24:35,618] WARN Property ssl.truststore.location is not valid 
> (kafka.utils.VerifiableProperties)
> [2016-03-03 10:24:35,618] WARN Property ssl.truststore.password is not valid 
> (kafka.utils.VerifiableProperties)
> [2016-03-03 10:24:35,752] WARN Property security.protocol is not valid 
> (kafka.utils.VerifiableProperties)
> [2016-03-03 10:24:35,752] WARN Property ssl.keypassword is not valid 
> (kafka.utils.VerifiableProperties)
> [2016-03-03 10:24:35,752] WARN Property ssl.keystore.location is not valid 
> (kafka.utils.VerifiableProperties)
> [2016-03-03 10:24:35,752] WARN Property ssl.keystore.password is not valid 
> (kafka.utils.VerifiableProperties)
> [2016-03-03 10:24:35,752] WARN Property ssl.truststore.location is not valid 
> (kafka.utils.VerifiableProperties)
> [2016-03-03 10:24:35,753] WARN Property ssl.truststore.password is not valid 
> (kafka.utils.VerifiableProperties)
> [2016-03-03 10:24:36,251] WARN No broker partitions consumed by consumer 
> thread test-consumer-group_munkhan-kafka1.cisco.com-1457018675755-b9bb4c75-0 
> for topic test1 (kafka.consumer.RangeAssignor)
> [2016-03-03 10:24:36,251] WARN No broker partitions consumed by consumer 
> thread test-consumer-group_munkhan-kafka1.cisco.com-1457018675755-b9bb4c75-0 
> for topic test1 (kafka.consumer.RangeAssignor)
> However, MirrorMaker is able to mirror data. If I remove the configurations 
> related to the warning messages from my producer, MirrorMaker does not work. 
> So it seems that despite the warnings shown above, the ssl.* configuration 
> properties are used somehow. 
> My question is: are those warnings harmless in this context?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-3951) kafka.common.KafkaStorageException: I/O exception in append to log

2017-08-19 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3951.
--
Resolution: Cannot Reproduce

 Please reopen if you think the issue still exists.


> kafka.common.KafkaStorageException: I/O exception in append to log
> --
>
> Key: KAFKA-3951
> URL: https://issues.apache.org/jira/browse/KAFKA-3951
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 0.9.0.1
>Reporter: wanzi.zhao
> Attachments: server-1.properties, server.properties
>
>
> I have two brokers on the same server using two ports, 10.45.33.195:9092 and 
> 10.45.33.195:9093. They use two log directories, "log.dirs=/tmp/kafka-logs" 
> and "log.dirs=/tmp/kafka-logs-1". When I shut down my consumer application 
> (Java API), change its groupId and restart it, my Kafka brokers stop 
> working. This is the stack trace I get:
> [2016-07-11 17:02:47,314] INFO [Group Metadata Manager on Broker 0]: Loading 
> offsets and group metadata from [__consumer_offsets,0] 
> (kafka.coordinator.GroupMetadataManager)
> [2016-07-11 17:02:47,955] FATAL [Replica Manager on Broker 0]: Halting due to 
> unrecoverable I/O error while handling produce request:  
> (kafka.server.ReplicaManager)
> kafka.common.KafkaStorageException: I/O exception in append to log 
> '__consumer_offsets-38'
> at kafka.log.Log.append(Log.scala:318)
> at kafka.cluster.Partition$$anonfun$9.apply(Partition.scala:442)
> at kafka.cluster.Partition$$anonfun$9.apply(Partition.scala:428)
> at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262)
> at kafka.utils.CoreUtils$.inReadLock(CoreUtils.scala:268)
> at kafka.cluster.Partition.appendMessagesToLeader(Partition.scala:428)
> at 
> kafka.server.ReplicaManager$$anonfun$appendToLocalLog$2.apply(ReplicaManager.scala:401)
> at 
> kafka.server.ReplicaManager$$anonfun$appendToLocalLog$2.apply(ReplicaManager.scala:386)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
> at scala.collection.immutable.Map$Map1.foreach(Map.scala:109)
> at 
> scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
> at scala.collection.AbstractTraversable.map(Traversable.scala:105)
> at 
> kafka.server.ReplicaManager.appendToLocalLog(ReplicaManager.scala:386)
> at 
> kafka.server.ReplicaManager.appendMessages(ReplicaManager.scala:322)
> at 
> kafka.coordinator.GroupMetadataManager.store(GroupMetadataManager.scala:228)
> at 
> kafka.coordinator.GroupCoordinator$$anonfun$handleCommitOffsets$9.apply(GroupCoordinator.scala:429)
> at 
> kafka.coordinator.GroupCoordinator$$anonfun$handleCommitOffsets$9.apply(GroupCoordinator.scala:429)
> at scala.Option.foreach(Option.scala:236)
> at 
> kafka.coordinator.GroupCoordinator.handleCommitOffsets(GroupCoordinator.scala:429)
> at 
> kafka.server.KafkaApis.handleOffsetCommitRequest(KafkaApis.scala:280)
> at kafka.server.KafkaApis.handle(KafkaApis.scala:76)
> at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.io.FileNotFoundException: 
> /tmp/kafka-logs/__consumer_offsets-38/.index (No such 
> file or directory)
> at java.io.RandomAccessFile.open0(Native Method)
> at java.io.RandomAccessFile.open(RandomAccessFile.java:316)
> at java.io.RandomAccessFile.<init>(RandomAccessFile.java:243)
> at 
> kafka.log.OffsetIndex$$anonfun$resize$1.apply(OffsetIndex.scala:277)
> at 
> kafka.log.OffsetIndex$$anonfun$resize$1.apply(OffsetIndex.scala:276)
> at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262)
> at kafka.log.OffsetIndex.resize(OffsetIndex.scala:276)
> at 
> kafka.log.OffsetIndex$$anonfun$trimToValidSize$1.apply$mcV$sp(OffsetIndex.scala:265)
> at 
> kafka.log.OffsetIndex$$anonfun$trimToValidSize$1.apply(OffsetIndex.scala:265)
> at 
> kafka.log.OffsetIndex$$anonfun$trimToValidSize$1.apply(OffsetIndex.scala:265)
> at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262)
> at kafka.log.OffsetIndex.trimToValidSize(OffsetIndex.scala:264)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-2296) Not able to delete topic on latest kafka

2017-08-19 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-2296.
--
Resolution: Duplicate

> Not able to delete topic on latest kafka
> 
>
> Key: KAFKA-2296
> URL: https://issues.apache.org/jira/browse/KAFKA-2296
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.1
>Reporter: Andrew M
>
> Was able to reproduce [inability to delete 
> topic|https://issues.apache.org/jira/browse/KAFKA-1397?focusedCommentId=14491442=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14491442]
> on a running cluster with Kafka 0.8.2.1.
> The cluster consists of 2 c3.xlarge AWS instances with sufficient storage 
> attached. All communication between nodes goes through an AWS VPC.
> Some warnings from the logs:
> {noformat}[Controller-1234-to-broker-4321-send-thread], Controller 1234 epoch 
> 20 fails to send request 
> Name:UpdateMetadataRequest;Version:0;Controller:1234;ControllerEpoch:20;CorrelationId:24047;ClientId:id_1234-host_1.2.3.4-port_6667;AliveBrokers:id:1234,host:1.2.3.4,port:6667,id:4321,host:4.3.2.1,port:6667;PartitionState:[topic_name,45]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:4321,1234,LeaderEpoch:0,ControllerEpoch:19),ReplicationFactor:2),AllReplicas:1234,4321),[topic_name,27]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:4321,1234,LeaderEpoch:0,ControllerEpoch:19),ReplicationFactor:2),AllReplicas:1234,4321),[topic_name,17]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:4321,1234,LeaderEpoch:0,ControllerEpoch:19),ReplicationFactor:2),AllReplicas:1234,4321),[topic_name,49]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:4321,1234,LeaderEpoch:0,ControllerEpoch:19),ReplicationFactor:2),AllReplicas:1234,4321),[topic_name,7]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:4321,1234,LeaderEpoch:0,ControllerEpoch:19),ReplicationFactor:2),AllReplicas:1234,4321),[topic_name,26]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:1234,4321,LeaderEpoch:0,ControllerEpoch:20),ReplicationFactor:2),AllReplicas:4321,1234),[topic_name,62]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:1234,4321,LeaderEpoch:0,ControllerEpoch:20),ReplicationFactor:2),AllReplicas:4321,1234),[topic_name,18]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:1234,4321,LeaderEpoch:0,ControllerEpoch:20),ReplicationFactor:2),AllReplicas:4321,1234),[topic_name,36]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:1234,4321,LeaderEpoch:0,ControllerEpoch:20),ReplicationFactor:2),AllReplicas:4321,1234),[topic_name,29]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:4321,1234,LeaderEpoch:0,ControllerEpoch:19),ReplicationFactor:2),AllReplicas:1234,4321),[topic_name,53]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:4321,1234,LeaderEpoch:0,ControllerEpoch:19),ReplicationFactor:2),AllReplicas:1234,4321),[topic_name,52]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:1234,4321,LeaderEpoch:0,ControllerEpoch:20),ReplicationFactor:2),AllReplicas:4321,1234),[topic_name,2]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:1234,4321,LeaderEpoch:0,ControllerEpoch:20),ReplicationFactor:2),AllReplicas:4321,1234),[topic_name,12]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:1234,4321,LeaderEpoch:0,ControllerEpoch:20),ReplicationFactor:2),AllReplicas:4321,1234),[topic_name,33]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:4321,1234,LeaderEpoch:0,ControllerEpoch:19),ReplicationFactor:2),AllReplicas:1234,4321),[topic_name,14]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:1234,4321,LeaderEpoch:0,ControllerEpoch:20),ReplicationFactor:2),AllReplicas:4321,1234),[topic_name,63]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:4321,1234,LeaderEpoch:0,ControllerEpoch:19),ReplicationFactor:2),AllReplicas:1234,4321),[topic_name,30]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:1234,4321,LeaderEpoch:0,ControllerEpoch:20),ReplicationFactor:2),AllReplicas:4321,1234),[topic_name,6]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:1234,4321,LeaderEpoch:0,ControllerEpoch:20),ReplicationFactor:2),AllReplicas:4321,1234),[topic_name,28]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:1234,4321,LeaderEpoch:0,ControllerEpoch:20),ReplicationFactor:2),AllReplicas:4321,1234),[topic_name,38]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:1234,4321,LeaderEpoch:0,ControllerEpoch:20),ReplicationFactor:2),AllReplicas:4321,1234),[topic_name,24]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:1234,4321,LeaderEpoch:0,ControllerEpoch:20),ReplicationFactor:2),AllReplicas:4321,1234),[topic_name,31]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:4321,1234,LeaderEpoch:0,ControllerEpoch:19),ReplicationFactor:2),AllReplicas:1234,4321),[topic_name,4]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:1234,4321,LeaderEpoch:0,ControllerEpoch:20),ReplicationFactor:2),AllReplicas:4321,1234),[topic_name,20]
>  -> 
> (LeaderAndIsrInfo:(Leader:-2,ISR:1234,4321,LeaderEpoch:0,ControllerEpoch:20),ReplicationFactor:2),AllReplicas:4321,1234),[topic_name,54]
>  -> 
> 

[jira] [Resolved] (KAFKA-2289) KafkaProducer logs erroneous warning on startup

2017-08-19 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-2289.
--
Resolution: Fixed

This has been fixed.

> KafkaProducer logs erroneous warning on startup
> ---
>
> Key: KAFKA-2289
> URL: https://issues.apache.org/jira/browse/KAFKA-2289
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.8.2.1
>Reporter: Henning Schmiedehausen
>Priority: Trivial
>
> When creating a new KafkaProducer using the 
> KafkaProducer(KafkaConfig, Serializer, Serializer) constructor, Kafka 
> will list the following lines, which are harmless but are still at WARN level:
> WARN  [2015-06-19 23:13:56,557] 
> org.apache.kafka.clients.producer.ProducerConfig: The configuration 
> value.serializer = class  was supplied but isn't a known config.
> WARN  [2015-06-19 23:13:56,557] 
> org.apache.kafka.clients.producer.ProducerConfig: The configuration 
> key.serializer = class  was supplied but isn't a known config.
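
For illustration, a minimal sketch of the construction path in question, shown with the current client API (the spurious warning itself was specific to 0.8.2.x):

{code}
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.common.serialization.StringSerializer;

public class SerializerWarningDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        // Serializers passed as instances: ProducerConfig then treated the
        // corresponding config entries as unknown and warned about them.
        KafkaProducer<String, String> producer =
            new KafkaProducer<>(props, new StringSerializer(), new StringSerializer());
        producer.close();
    }
}
{code}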



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-3927) kafka broker config docs issue

2017-08-22 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3927.
--
Resolution: Later

Yes, these changes were done in KAFKA-615. Please reopen if the issue still 
exists. 


> kafka broker config docs issue
> --
>
> Key: KAFKA-3927
> URL: https://issues.apache.org/jira/browse/KAFKA-3927
> Project: Kafka
>  Issue Type: Bug
>  Components: website
>Affects Versions: 0.10.0.0
>Reporter: Shawn Guo
>Priority: Minor
>
> https://kafka.apache.org/documentation.html#brokerconfigs
> log.flush.interval.messages 
> default value is "9223372036854775807"
> log.flush.interval.ms 
> default value is null
> log.flush.scheduler.interval.ms 
> default value is "9223372036854775807"
> etc. Obviously these default values are incorrect. How do these docs get 
> generated? It looks confusing. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-3473) Add controller channel manager request queue time metric.

2017-08-22 Thread Manikumar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16137281#comment-16137281
 ] 

Manikumar commented on KAFKA-3473:
--

@ijuma Is this covered in KAFKA-5135/KIP-143?

> Add controller channel manager request queue time metric.
> -
>
> Key: KAFKA-3473
> URL: https://issues.apache.org/jira/browse/KAFKA-3473
> Project: Kafka
>  Issue Type: Improvement
>  Components: controller
>Affects Versions: 0.10.0.0
>Reporter: Jiangjie Qin
>Assignee: Dong Lin
>
> Currently the controller appends requests to brokers into the controller 
> channel manager queue during state transitions, i.e. the state transitions 
> are propagated asynchronously. We need to track the request queue time on 
> the controller side to see how long state propagation is delayed after the 
> state transition finishes on the controller.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-3800) java client can`t poll msg

2017-08-22 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3800?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3800.
--
Resolution: Cannot Reproduce

 Please reopen if the issue still exists. 


> java client can`t poll msg
> --
>
> Key: KAFKA-3800
> URL: https://issues.apache.org/jira/browse/KAFKA-3800
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.9.0.1
> Environment: java8,win7 64
>Reporter: frank
>Assignee: Neha Narkhede
>
> I use a mixed-case topic name, e.g. Test_4, and the polled messages are 
> null. Why? All-lowercase names are OK. I tried Node.js, and 
> kafka-console-consumer.bat also works.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (KAFKA-5547) Return topic authorization failed if no topic describe access

2017-08-22 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar reassigned KAFKA-5547:


Assignee: Manikumar

> Return topic authorization failed if no topic describe access
> -
>
> Key: KAFKA-5547
> URL: https://issues.apache.org/jira/browse/KAFKA-5547
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jason Gustafson
>Assignee: Manikumar
>  Labels: security, usability
> Fix For: 1.0.0
>
>
> We previously made a change to several of the request APIs to return 
> UNKNOWN_TOPIC_OR_PARTITION if the principal does not have Describe access to 
> the topic. The thought was to avoid leaking information about which topics 
> exist. The problem with this is that a client which sees this error will just 
> keep retrying because it is usually treated as retriable. It seems, however, 
> that we could return TOPIC_AUTHORIZATION_FAILED instead and still avoid 
> leaking information as long as we ensure that the Describe authorization 
> check comes before the topic existence check. This would avoid the ambiguity 
> on the client.
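
For illustration, a hedged sketch of the proposed check ordering; the names are illustrative, not the actual KafkaApis code:

{code}
public class AuthOrderingSketch {
    enum Error { NONE, TOPIC_AUTHORIZATION_FAILED, UNKNOWN_TOPIC_OR_PARTITION }

    interface Authorizer { boolean canDescribe(String principal, String topic); }
    interface MetadataCache { boolean contains(String topic); }

    static Error handle(Authorizer auth, MetadataCache cache,
                        String principal, String topic) {
        if (!auth.canDescribe(principal, topic))
            return Error.TOPIC_AUTHORIZATION_FAILED;  // non-retriable, stops the retry loop
        if (!cache.contains(topic))
            return Error.UNKNOWN_TOPIC_OR_PARTITION;  // existence is now safe to reveal
        return Error.NONE;
    }
}
{code}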



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-4823) Creating Kafka Producer on application running on Java older version

2017-08-22 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-4823.
--
Resolution: Won't Fix

Kafka dropped support for Java 1.6 from the 0.9 release. You can try the REST 
Proxy or other-language libraries. Please reopen if you think otherwise.
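
For reference, a hedged Java 1.6-compatible sketch of the REST-proxy route (HttpURLConnection predates Java 7, so no try-with-resources). The URL and media type assume a Confluent REST Proxy with its v1 JSON convention running at rest-proxy:8082; adjust both for whatever proxy is actually deployed.

{code}
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class Java6RestProduce {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://rest-proxy:8082/topics/test");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/vnd.kafka.json.v1+json");
        conn.setDoOutput(true);
        String body = "{\"records\":[{\"value\":{\"msg\":\"hello\"}}]}";
        OutputStream out = conn.getOutputStream();
        out.write(body.getBytes("UTF-8"));
        out.close();
        System.out.println("HTTP " + conn.getResponseCode());
    }
}
{code}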

> Creating Kafka Producer on application running on Java older version
> 
>
> Key: KAFKA-4823
> URL: https://issues.apache.org/jira/browse/KAFKA-4823
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.1.1
> Environment: Application running on Java 1.6
>Reporter: live2code
>
> I have an application running on Java 1.6 which cannot be upgraded. This 
> application needs interfaces to post messages (as a producer) to a remote 
> Kafka server box, and also to receive messages as a consumer. The code runs 
> fine from my local environment, which is on Java 1.7, but the same producer 
> and consumer fail when executed within the application with the error:
> Caused by: java.lang.UnsupportedClassVersionError: JVMCFRE003 bad major 
> version; class=org/apache/kafka/clients/producer/KafkaProducer, offset=6
> Is there some way I can still do it?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-5401) Kafka Over TLS Error - Failed to send SSL Close message - Broken Pipe

2017-08-22 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-5401.
--
Resolution: Duplicate

> Kafka Over TLS Error - Failed to send SSL Close message - Broken Pipe
> -
>
> Key: KAFKA-5401
> URL: https://issues.apache.org/jira/browse/KAFKA-5401
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.10.1.1
> Environment: SLES 11 , Kakaf Over TLS 
>Reporter: PaVan
>  Labels: security
>
> SLES 11 
> WARN Failed to send SSL Close message 
> (org.apache.kafka.common.network.SslTransportLayer)
> java.io.IOException: Broken pipe
>  at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
>  at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
>  at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
>  at sun.nio.ch.IOUtil.write(IOUtil.java:65)
>  at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:471)
>  at 
> org.apache.kafka.common.network.SslTransportLayer.flush(SslTransportLayer.java:194)
>  at 
> org.apache.kafka.common.network.SslTransportLayer.close(SslTransportLayer.java:148)
>  at 
> org.apache.kafka.common.network.KafkaChannel.close(KafkaChannel.java:45)
>  at org.apache.kafka.common.network.Selector.close(Selector.java:442)
>  at org.apache.kafka.common.network.Selector.poll(Selector.java:310)
>  at 
> org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:256)
>  at 
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:216)
>  at 
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:128)
>  at java.lang.Thread.run(Thread.java:745)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-3653) expose the queue size in ControllerChannelManager

2017-08-22 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3653?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3653.
--
Resolution: Fixed

Fixed in KAFKA-5135/KIP-143

> expose the queue size in ControllerChannelManager
> -
>
> Key: KAFKA-3653
> URL: https://issues.apache.org/jira/browse/KAFKA-3653
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jun Rao
>Assignee: Gwen Shapira
>
> Currently, ControllerChannelManager maintains a queue per broker. If the 
> queue fills up, metadata propagation to the broker is delayed. It would be 
> useful to expose a metric on the size of the queue for monitoring.
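
For illustration, a minimal sketch of the kind of gauge this asks for (illustrative only, not the actual ControllerChannelManager code): the metric simply reports the current depth of a per-broker request queue.

{code}
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.function.Supplier;

public class QueueSizeGauge {
    public static void main(String[] args) {
        BlockingQueue<Runnable> brokerQueue = new LinkedBlockingQueue<>();
        Supplier<Integer> queueSizeGauge = brokerQueue::size; // sampled by the metrics reporter
        brokerQueue.add(() -> { });
        System.out.println("QueueSize = " + queueSizeGauge.get()); // prints 1
    }
}
{code}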



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5590) Delete Kafka Topic Complete Failed After Enable Ranger Kafka Plugin

2017-08-23 Thread Manikumar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16138257#comment-16138257
 ] 

Manikumar commented on KAFKA-5590:
--

Current TopicCommand uses ZooKeeper-based deletion. This does not involve any 
Kafka ACLs. In a Kerberos setup, only the broker user/principal can write to 
ZK, so you need the broker principal's credentials to make changes to ZK. 
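
For reference, a hedged sketch of the broker-side deletion path that does go through Kafka ACLs (and hence through Ranger's authorizer); it assumes a 0.11+ cluster and the Java AdminClient, whereas the reporter's 0.10.0 cluster would instead need the broker principal's ZK credentials as described above.

{code}
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;

public class AclAwareDelete {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // Authorized server-side, so plugin-managed ACLs apply.
            admin.deleteTopics(Collections.singleton("to-delete")).all().get();
        }
    }
}
{code}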


> Delete Kafka Topic Complete Failed After Enable Ranger Kafka Plugin
> ---
>
> Key: KAFKA-5590
> URL: https://issues.apache.org/jira/browse/KAFKA-5590
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.10.0.0
> Environment: kafka and ranger under ambari
>Reporter: Chaofeng Zhao
>
> Hi:
> Recently I have been developing some applications with Kafka under Ranger. 
> But when I enable the Ranger Kafka plugin I cannot delete a Kafka topic 
> completely, even though 'delete.topic.enable=true' is set. And I find that 
> with the Ranger Kafka plugin enabled, the operation must be authorized. How 
> can I delete a Kafka topic completely under Ranger? Thank you.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (KAFKA-5590) Delete Kafka Topic Complete Failed After Enable Ranger Kafka Plugin

2017-08-23 Thread Manikumar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16138257#comment-16138257
 ] 

Manikumar edited comment on KAFKA-5590 at 8/23/17 11:56 AM:


Current TopicCommand uses ZooKeeper-based deletion. This does not involve any 
Kafka ACLs. In a Kerberos setup, only the broker user/principal can write to 
ZK, so you need the broker principal's credentials to make changes to ZK. 



was (Author: omkreddy):
Current TopicCommand uses Zookeeper based deletion.  This does not involve any 
Kafka Acls.  In Kerberos setup,  only broker user/principal can write to the 
ZK.  So, you need boker Pricipal credentials to make changes to ZK. 


> Delete Kafka Topic Complete Failed After Enable Ranger Kafka Plugin
> ---
>
> Key: KAFKA-5590
> URL: https://issues.apache.org/jira/browse/KAFKA-5590
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.10.0.0
> Environment: kafka and ranger under ambari
>Reporter: Chaofeng Zhao
>
> Hi:
> Recently I have been developing some applications with Kafka under Ranger. 
> But when I enable the Ranger Kafka plugin I cannot delete a Kafka topic 
> completely, even though 'delete.topic.enable=true' is set. And I find that 
> with the Ranger Kafka plugin enabled, the operation must be authorized. How 
> can I delete a Kafka topic completely under Ranger? Thank you.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-4454) Authorizer should also include the Principal generated by the PrincipalBuilder.

2017-09-15 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-4454.
--
Resolution: Fixed

This is covered in KIP-189/KAFKA-5783

> Authorizer should also include the Principal generated by the 
> PrincipalBuilder.
> ---
>
> Key: KAFKA-4454
> URL: https://issues.apache.org/jira/browse/KAFKA-4454
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.1
>Reporter: Mayuresh Gharat
>Assignee: Mayuresh Gharat
>
> Currently Kafka allows users to plug in a custom PrincipalBuilder and a custom 
> Authorizer.
> The Authorizer.authorize() object takes in a Session object that wraps 
> KafkaPrincipal and InetAddress.
> The KafkaPrincipal currently has a PrincipalType and Principal name, which is 
> the name of Principal generated by the PrincipalBuilder. 
> This Principal, generated by the plugged-in PrincipalBuilder, might have other 
> fields that might be required by the plugged-in Authorizer, but currently we 
> lose this information since we only extract the name of the Principal while 
> creating the KafkaPrincipal in SocketServer.  
> It would be great if KafkaPrincipal has an additional field 
> "channelPrincipal" which is used to store the Principal generated by the 
> plugged in PrincipalBuilder.
> The plugged-in Authorizer can then use this "channelPrincipal" to do 
> authorization.
>  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-3722) PlaintextChannelBuilder should not use ChannelBuilders.createPrincipalBuilder(configs) for creating instance of PrincipalBuilder

2017-09-15 Thread Manikumar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16167717#comment-16167717
 ] 

Manikumar commented on KAFKA-3722:
--

[~mgharat] I think KIP-189/KAFKA-5783 addresses this issue. If you agree, we 
can close it.

> PlaintextChannelBuilder should not use 
> ChannelBuilders.createPrincipalBuilder(configs) for creating instance of 
> PrincipalBuilder
> 
>
> Key: KAFKA-3722
> URL: https://issues.apache.org/jira/browse/KAFKA-3722
> Project: Kafka
>  Issue Type: Bug
>Reporter: Mayuresh Gharat
>Assignee: Mayuresh Gharat
>
> Consider this scenario :
> 1) We have a Kafka Broker running on  PlainText and SSL port simultaneously.
> 2) We try to plug in a custom principal builder using the config 
> "principal.builder.class" for requests coming over the SSL port.
> 3) ChannelBuilders.createPrincipalBuilder(configs) first checks whether 
> "principal.builder.class" is specified in the passed-in configs and tries to 
> use it even when it is building the PrincipalBuilder instance for the 
> PlainText port, although that custom principal class is only meant for the 
> SSL port.
> IMO, having a DefaultPrincipalBuilder for the PlainText port should be fine.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (KAFKA-5750) Elevate log messages for denials to INFO in SimpleAclAuthorizer class

2017-09-14 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar updated KAFKA-5750:
-
Summary: Elevate log messages for denials to INFO in SimpleAclAuthorizer 
class  (was: Elevate log messages for denials to WARN in SimpleAclAuthorizer 
class)

> Elevate log messages for denials to INFO in SimpleAclAuthorizer class
> -
>
> Key: KAFKA-5750
> URL: https://issues.apache.org/jira/browse/KAFKA-5750
> Project: Kafka
>  Issue Type: Improvement
>  Components: security
>Reporter: Phillip Walker
>Assignee: Manikumar
> Fix For: 1.0.0
>
>
> Currently, the authorizer logs all messages at DEBUG level and logs every 
> single authorization attempt, which can greatly decrease cluster performance, 
> especially when Mirrormaker also produces to that cluster. Many InfoSec 
> requirements, though, require that authorization denials be logged. The 
> proposed solution is to elevate any denial in SimpleAclAuthorizer and any 
> other relevant class to WARN while leaving approvals at their current 
> logging levels.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5407) Mirrormaker dont start after upgrade

2017-09-14 Thread Manikumar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16167363#comment-16167363
 ] 

Manikumar commented on KAFKA-5407:
--

[~fvegaucr] Can you upload the broker logs from the time of the error? 

> Mirrormaker dont start after upgrade
> 
>
> Key: KAFKA-5407
> URL: https://issues.apache.org/jira/browse/KAFKA-5407
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.10.2.1
> Environment: Operating system
> CentOS 6.8
> HW
> Board Mfg : HP
>  Board Product : ProLiant DL380p Gen8
> CPU's x2
> Product Manufacturer  : Intel
>  Product Name  :  Intel(R) Xeon(R) CPU E5-2660 v2 @ 2.20GHz
>  Memory Type   : DDR3 SDRAM
>  SDRAM Capacity: 2048 MB
>  Total Memory: : 64GB
> Hard drive size and layout:
> 9 drives using JBOD
> drive size 3.6TB each
>Reporter: Fernando Vega
>Priority: Critical
> Attachments: mirrormaker-repl-sjc2-to-hkg1.log.8
>
>
> Currently I'm upgrading the cluster from 0.8.2-beta to 0.10.2.1.
> So I followed the rolling upgrade procedure.
> Here are the config files:
> Consumer
> {noformat}
> #
> # Cluster: repl
> # Topic list(goes into command line): 
> REPL-ams1-global,REPL-atl1-global,REPL-sjc2-global,REPL-ams1-global-PN_HXIDMAP_.*,REPL-atl1-global-PN_HXIDMAP_.*,REPL-sjc2-global-PN_HXIDMAP_.*,REPL-ams1-global-PN_HXCONTEXTUALV2_.*,REPL-atl1-global-PN_HXCONTEXTUALV2_.*,REPL-sjc2-global-PN_HXCONTEXTUALV2_.*
> bootstrap.servers=app001:9092,app002:9092,app003:9092,app004:9092
> group.id=hkg1_cluster
> auto.commit.interval.ms=6
> partition.assignment.strategy=org.apache.kafka.clients.consumer.RoundRobinAssignor
> {noformat}
> Producer
> {noformat}
>  hkg1
> # # Producer
> # # hkg1
> bootstrap.servers=app001:9092,app002:9092,app003:9092,app004:9092
> compression.type=gzip
> acks=0
> {noformat}
> Broker
> {noformat}
> auto.leader.rebalance.enable=true
> delete.topic.enable=true
> socket.receive.buffer.bytes=1048576
> socket.send.buffer.bytes=1048576
> default.replication.factor=2
> auto.create.topics.enable=true
> num.partitions=1
> num.network.threads=8
> num.io.threads=40
> log.retention.hours=1
> log.roll.hours=1
> num.replica.fetchers=8
> zookeeper.connection.timeout.ms=3
> zookeeper.session.timeout.ms=3
> inter.broker.protocol.version=0.10.2
> log.message.format.version=0.8.2
> {noformat}
> I also tried using the stock configuration, with no luck.
> The error that I get is this:
> {noformat}
> 2017-06-07 12:24:45,476] INFO ConsumerConfig values:
>   auto.commit.interval.ms = 6
>   auto.offset.reset = latest
>   bootstrap.servers = [app454.sjc2.mytest.com:9092, 
> app455.sjc2.mytest.com:9092, app456.sjc2.mytest.com:9092, 
> app457.sjc2.mytest.com:9092, app458.sjc2.mytest.com:9092, 
> app459.sjc2.mytest.com:9092]
>   check.crcs = true
>   client.id = MirrorMaker_hkg1-1
>   connections.max.idle.ms = 54
>   enable.auto.commit = false
>   exclude.internal.topics = true
>   fetch.max.bytes = 52428800
>   fetch.max.wait.ms = 500
>   fetch.min.bytes = 1
>   group.id = MirrorMaker_hkg1
>   heartbeat.interval.ms = 3000
>   interceptor.classes = null
>   key.deserializer = class 
> org.apache.kafka.common.serialization.ByteArrayDeserializer
>   max.partition.fetch.bytes = 1048576
>   max.poll.interval.ms = 30
>   max.poll.records = 500
>   metadata.max.age.ms = 30
>   metric.reporters = []
>   metrics.num.samples = 2
>   metrics.recording.level = INFO
>   metrics.sample.window.ms = 3
>   partition.assignment.strategy = 
> [org.apache.kafka.clients.consumer.RoundRobinAssignor]
>   receive.buffer.bytes = 65536
>   reconnect.backoff.ms = 50
>   request.timeout.ms = 305000
>   retry.backoff.ms = 100
>   sasl.jaas.config = null
>   sasl.kerberos.kinit.cmd = /usr/bin/kinit
>   sasl.kerberos.min.time.before.relogin = 6
>   sasl.kerberos.service.name = null
>   sasl.kerberos.ticket.renew.jitter = 0.05
>   sasl.kerberos.ticket.renew.window.factor = 0.8
>   sasl.mechanism = GSSAPI
>   security.protocol = PLAINTEXT
>   send.buffer.bytes = 131072
>   session.timeout.ms = 1
>   ssl.cipher.suites = null
>   ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
>   ssl.endpoint.identification.algorithm = null
>   ssl.key.password = null
>   ssl.keymanager.algorithm = SunX509
>   ssl.keystore.location = null
>   ssl.keystore.password = null
>   ssl.keystore.type = JKS
>   ssl.protocol = TLS
>   ssl.provider = null
>   ssl.secure.random.implementation = null
>   ssl.trustmanager.algorithm = PKIX
>   ssl.truststore.location = null
>   ssl.truststore.password = null
>   ssl.truststore.type = 

[jira] [Commented] (KAFKA-5407) Mirrormaker dont start after upgrade

2017-09-16 Thread Manikumar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16168830#comment-16168830
 ] 

Manikumar commented on KAFKA-5407:
--

From the broker logs:

{code}
[2017-09-14 15:16:56,688] ERROR [Group Metadata Manager on Broker 43]: 
Appending metadata message for group MirrorMaker_hkg1 generation 14 failed due 
to org.apache.kafka.common.errors.RecordTooLargeException, returning UNKNOWN 
error code to the client (kafka.coordinator.GroupMetadataManager)
[2017-09-14 15:16:56,690] INFO [GroupCoordinator 43]: Preparing to restabilize 
group MirrorMaker_hkg1 with old generation 14 
(kafka.coordinator.GroupCoordinator)
{code}

Try increasing the max.message.bytes config on the broker. 

cc [~hachikuji]
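
For reference, a hedged sketch of applying that override with the 0.11+ Java AdminClient; on the reporter's 0.10.2 cluster the same topic-level override would be set with kafka-configs.sh instead, and the 2 MB value is only an example:

{code}
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

public class RaiseMaxMessageBytes {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            ConfigResource offsetsTopic =
                new ConfigResource(ConfigResource.Type.TOPIC, "__consumer_offsets");
            Config override = new Config(Collections.singleton(
                new ConfigEntry("max.message.bytes", "2097152"))); // 2 MB example
            admin.alterConfigs(Collections.singletonMap(offsetsTopic, override)).all().get();
        }
    }
}
{code}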

> Mirrormaker dont start after upgrade
> 
>
> Key: KAFKA-5407
> URL: https://issues.apache.org/jira/browse/KAFKA-5407
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.10.2.1
> Environment: Operating system
> CentOS 6.8
> HW
> Board Mfg : HP
>  Board Product : ProLiant DL380p Gen8
> CPU's x2
> Product Manufacturer  : Intel
>  Product Name  :  Intel(R) Xeon(R) CPU E5-2660 v2 @ 2.20GHz
>  Memory Type   : DDR3 SDRAM
>  SDRAM Capacity: 2048 MB
>  Total Memory: : 64GB
> Hard drive size and layout:
> 9 drives using JBOD
> drive size 3.6TB each
>Reporter: Fernando Vega
>Priority: Critical
> Attachments: broker.hkg1.new, debug.hkg1.new, 
> mirrormaker-repl-sjc2-to-hkg1.log.8
>
>
> Currently I'm upgrading the cluster from 0.8.2-beta to 0.10.2.1.
> So I followed the rolling upgrade procedure.
> Here are the config files:
> Consumer
> {noformat}
> #
> # Cluster: repl
> # Topic list(goes into command line): 
> REPL-ams1-global,REPL-atl1-global,REPL-sjc2-global,REPL-ams1-global-PN_HXIDMAP_.*,REPL-atl1-global-PN_HXIDMAP_.*,REPL-sjc2-global-PN_HXIDMAP_.*,REPL-ams1-global-PN_HXCONTEXTUALV2_.*,REPL-atl1-global-PN_HXCONTEXTUALV2_.*,REPL-sjc2-global-PN_HXCONTEXTUALV2_.*
> bootstrap.servers=app001:9092,app002:9092,app003:9092,app004:9092
> group.id=hkg1_cluster
> auto.commit.interval.ms=6
> partition.assignment.strategy=org.apache.kafka.clients.consumer.RoundRobinAssignor
> {noformat}
> Producer
> {noformat}
>  hkg1
> # # Producer
> # # hkg1
> bootstrap.servers=app001:9092,app002:9092,app003:9092,app004:9092
> compression.type=gzip
> acks=0
> {noformat}
> Broker
> {noformat}
> auto.leader.rebalance.enable=true
> delete.topic.enable=true
> socket.receive.buffer.bytes=1048576
> socket.send.buffer.bytes=1048576
> default.replication.factor=2
> auto.create.topics.enable=true
> num.partitions=1
> num.network.threads=8
> num.io.threads=40
> log.retention.hours=1
> log.roll.hours=1
> num.replica.fetchers=8
> zookeeper.connection.timeout.ms=3
> zookeeper.session.timeout.ms=3
> inter.broker.protocol.version=0.10.2
> log.message.format.version=0.8.2
> {noformat}
> I also tried using the stock configuration, with no luck.
> The error that I get is this:
> {noformat}
> 2017-06-07 12:24:45,476] INFO ConsumerConfig values:
>   auto.commit.interval.ms = 6
>   auto.offset.reset = latest
>   bootstrap.servers = [app454.sjc2.mytest.com:9092, 
> app455.sjc2.mytest.com:9092, app456.sjc2.mytest.com:9092, 
> app457.sjc2.mytest.com:9092, app458.sjc2.mytest.com:9092, 
> app459.sjc2.mytest.com:9092]
>   check.crcs = true
>   client.id = MirrorMaker_hkg1-1
>   connections.max.idle.ms = 54
>   enable.auto.commit = false
>   exclude.internal.topics = true
>   fetch.max.bytes = 52428800
>   fetch.max.wait.ms = 500
>   fetch.min.bytes = 1
>   group.id = MirrorMaker_hkg1
>   heartbeat.interval.ms = 3000
>   interceptor.classes = null
>   key.deserializer = class 
> org.apache.kafka.common.serialization.ByteArrayDeserializer
>   max.partition.fetch.bytes = 1048576
>   max.poll.interval.ms = 30
>   max.poll.records = 500
>   metadata.max.age.ms = 30
>   metric.reporters = []
>   metrics.num.samples = 2
>   metrics.recording.level = INFO
>   metrics.sample.window.ms = 3
>   partition.assignment.strategy = 
> [org.apache.kafka.clients.consumer.RoundRobinAssignor]
>   receive.buffer.bytes = 65536
>   reconnect.backoff.ms = 50
>   request.timeout.ms = 305000
>   retry.backoff.ms = 100
>   sasl.jaas.config = null
>   sasl.kerberos.kinit.cmd = /usr/bin/kinit
>   sasl.kerberos.min.time.before.relogin = 6
>   sasl.kerberos.service.name = null
>   sasl.kerberos.ticket.renew.jitter = 0.05
>   sasl.kerberos.ticket.renew.window.factor = 0.8
>   sasl.mechanism = GSSAPI
>   security.protocol = PLAINTEXT
>   send.buffer.bytes = 131072
>   session.timeout.ms = 1
>   ssl.cipher.suites = null
>   

[jira] [Resolved] (KAFKA-5917) Kafka not starting

2017-09-18 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-5917.
--
Resolution: Won't Fix

These kinds of issues can be avoided once we completely move the Kafka tools 
to the Java Admin API.

> Kafka not starting
> --
>
> Key: KAFKA-5917
> URL: https://issues.apache.org/jira/browse/KAFKA-5917
> Project: Kafka
>  Issue Type: Bug
>  Components: admin
>Affects Versions: 0.10.0.0
>Reporter: Balu
>
> Getting this error in kafka,zookeeper,schema repository cluster.
>  FATAL [Kafka Server 3], Fatal error during KafkaServer startup. Prepare to 
> shutdown (kafka.server.KafkaServer)
> java.lang.IllegalArgumentException: requirement failed
> at scala.Predef$.require(Predef.scala:212)
> at 
> kafka.server.DynamicConfigManager$ConfigChangedNotificationHandler$.processNotification(DynamicConfigManager.scala:90)
> at 
> kafka.common.ZkNodeChangeNotificationListener$$anonfun$kafka$common$ZkNodeChangeNotificationListener$$processNotifications$2$$anonfun$apply$2.apply(ZkNodeChangeNotificationListener.scala:95)
> at 
> kafka.common.ZkNodeChangeNotificationListener$$anonfun$kafka$common$ZkNodeChangeNotificationListener$$processNotifications$2$$anonfun$apply$2.apply(ZkNodeChangeNotificationListener.scala:95)
> at scala.Option.map(Option.scala:146)
> at 
> kafka.common.ZkNodeChangeNotificationListener$$anonfun$kafka$common$ZkNodeChangeNotificationListener$$processNotifications$2.apply(ZkNodeChangeNotificationListener.scala:95)
> at 
> kafka.common.ZkNodeChangeNotificationListener$$anonfun$kafka$common$ZkNodeChangeNotificationListener$$processNotifications$2.apply(ZkNodeChangeNotificationListener.scala:90)
> at 
> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
> at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
> at 
> kafka.common.ZkNodeChangeNotificationListener.kafka$common$ZkNodeChangeNotificationListener$$processNotifications(ZkNodeChangeNotificationListener.scala:90)
> at 
> kafka.common.ZkNodeChangeNotificationListener.processAllNotifications(ZkNodeChangeNotificationListener.scala:79)
> at 
> kafka.common.ZkNodeChangeNotificationListener.init(ZkNodeChangeNotificationListener.scala:67)
> at 
> kafka.server.DynamicConfigManager.startup(DynamicConfigManager.scala:122)
> at kafka.server.KafkaServer.startup(KafkaServer.scala:233)
> at 
> kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:37)
> at kafka.Kafka$.main(Kafka.scala:67)
> at kafka.Kafka.main(Kafka.scala)
> Please help



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5917) Kafka not starting

2017-09-18 Thread Manikumar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16169714#comment-16169714
 ] 

Manikumar commented on KAFKA-5917:
--

This happens when you run the kafka-configs.sh script with newer-version Kafka 
libs on an older version of the Kafka server. My guess is that you are running 
a newer version of KafkaManager against an older version of the Kafka cluster.

You need to manually remove the stale /config/changes/config_change_ znodes to 
restore Kafka.
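
As a rough sketch (the ZooKeeper address and the znode sequence number are 
placeholders, not from this ticket), the stale notifications can be inspected 
and removed with the ZooKeeper shell that ships with Kafka:

{code}
# list the pending config-change notification znodes
bin/zookeeper-shell.sh localhost:2181 ls /config/changes

# remove the offending notification znode(s)
bin/zookeeper-shell.sh localhost:2181 delete /config/changes/config_change_0000000007
{code}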

> Kafka not starting
> --
>
> Key: KAFKA-5917
> URL: https://issues.apache.org/jira/browse/KAFKA-5917
> Project: Kafka
>  Issue Type: Bug
>  Components: admin
>Affects Versions: 0.10.0.0
>Reporter: Balu
>
> Getting this error in a kafka, zookeeper, schema repository cluster.
>  FATAL [Kafka Server 3], Fatal error during KafkaServer startup. Prepare to 
> shutdown (kafka.server.KafkaServer)
> java.lang.IllegalArgumentException: requirement failed
> at scala.Predef$.require(Predef.scala:212)
> at 
> kafka.server.DynamicConfigManager$ConfigChangedNotificationHandler$.processNotification(DynamicConfigManager.scala:90)
> at 
> kafka.common.ZkNodeChangeNotificationListener$$anonfun$kafka$common$ZkNodeChangeNotificationListener$$processNotifications$2$$anonfun$apply$2.apply(ZkNodeChangeNotificationListener.scala:95)
> at 
> kafka.common.ZkNodeChangeNotificationListener$$anonfun$kafka$common$ZkNodeChangeNotificationListener$$processNotifications$2$$anonfun$apply$2.apply(ZkNodeChangeNotificationListener.scala:95)
> at scala.Option.map(Option.scala:146)
> at 
> kafka.common.ZkNodeChangeNotificationListener$$anonfun$kafka$common$ZkNodeChangeNotificationListener$$processNotifications$2.apply(ZkNodeChangeNotificationListener.scala:95)
> at 
> kafka.common.ZkNodeChangeNotificationListener$$anonfun$kafka$common$ZkNodeChangeNotificationListener$$processNotifications$2.apply(ZkNodeChangeNotificationListener.scala:90)
> at 
> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
> at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
> at 
> kafka.common.ZkNodeChangeNotificationListener.kafka$common$ZkNodeChangeNotificationListener$$processNotifications(ZkNodeChangeNotificationListener.scala:90)
> at 
> kafka.common.ZkNodeChangeNotificationListener.processAllNotifications(ZkNodeChangeNotificationListener.scala:79)
> at 
> kafka.common.ZkNodeChangeNotificationListener.init(ZkNodeChangeNotificationListener.scala:67)
> at 
> kafka.server.DynamicConfigManager.startup(DynamicConfigManager.scala:122)
> at kafka.server.KafkaServer.startup(KafkaServer.scala:233)
> at 
> kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:37)
> at kafka.Kafka$.main(Kafka.scala:67)
> at kafka.Kafka.main(Kafka.scala)
> Please help



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-5015) SASL/SCRAM authentication failures are hidden

2017-09-22 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-5015.
--
Resolution: Duplicate

Resolving as duplicate of KAFKA-4764

> SASL/SCRAM authentication failures are hidden
> -
>
> Key: KAFKA-5015
> URL: https://issues.apache.org/jira/browse/KAFKA-5015
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 0.10.2.0
>Reporter: Johan Ström
>
> During experimentation with multiple brokers and SCRAM authentication, the 
> brokers didn't seem to connect properly.
> Apparently the receiving server does not log connection failures (and their 
> cause) unless you enable DEBUG logging on 
> org.apache.kafka.common.network.Selector.
> Expected: that rejected connections are logged (without a stack trace) 
> without having to enable DEBUG.
> (The root cause of my problem was that I hadn't yet added the user to the 
> Zk-backed SCRAM configuration)
> The controller flooded controller.log with WARNs:
> {code}
> [2017-04-05 15:33:42,850] WARN [Controller-1-to-broker-1-send-thread], 
> Controller 1's connection to broker kafka02:9093 (id: 1 rack: null) was 
> unsuccessful (kafka.controller.RequestSendThread)
> java.io.IOException: Connection to kafka02:9093 (id: 1 rack: null) failed
> {code}
> The peer does not log anything in any log, until debugging was enabled:
> {code}
> [2017-04-05 15:28:58,373] DEBUG Accepted connection from /10.10.0.5:43670 on 
> /10.10.0.6:9093 and assigned it to processor 4, sendBufferSize 
> [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: 
> [102400|102400] (kafka.network.Acceptor)
> [2017-04-05 15:28:58,374] DEBUG Processor 4 listening to new connection from 
> /10.10.0.5:43670 (kafka.network.Processor)
> [2017-04-05 15:28:58,376] DEBUG Set SASL server state to HANDSHAKE_REQUEST 
> (org.apache.kafka.common.security.authenticator.SaslServerAuthenticator)
> [2017-04-05 15:28:58,376] DEBUG Handle Kafka request SASL_HANDSHAKE 
> (org.apache.kafka.common.security.authenticator.SaslServerAuthenticator)
> [2017-04-05 15:28:58,378] DEBUG Using SASL mechanism 'SCRAM-SHA-512' provided 
> by client 
> (org.apache.kafka.common.security.authenticator.SaslServerAuthenticator)
> [2017-04-05 15:28:58,381] DEBUG Setting SASL/SCRAM_SHA_512 server state to 
> RECEIVE_CLIENT_FIRST_MESSAGE 
> (org.apache.kafka.common.security.scram.ScramSaslServer)
> [2017-04-05 15:28:58,381] DEBUG Set SASL server state to AUTHENTICATE 
> (org.apache.kafka.common.security.authenticator.SaslServerAuthenticator)
> [2017-04-05 15:28:58,383] DEBUG Setting SASL/SCRAM_SHA_512 server state to 
> FAILED (org.apache.kafka.common.security.scram.ScramSaslServer)
> [2017-04-05 15:28:58,383] DEBUG Set SASL server state to FAILED 
> (org.apache.kafka.common.security.authenticator.SaslServerAuthenticator)
> [2017-04-05 15:28:58,385] DEBUG Connection with /10.10.0.5 disconnected 
> (org.apache.kafka.common.network.Selector)
> java.io.IOException: javax.security.sasl.SaslException: Authentication 
> failed: Credentials could not be obtained [Caused by 
> javax.security.sasl.SaslException: Authentication failed: Invalid user 
> credentials]
>   at 
> org.apache.kafka.common.security.authenticator.SaslServerAuthenticator.authenticate(SaslServerAuthenticator.java:250)
>   at 
> org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:71)
>   at 
> org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:350)
>   at org.apache.kafka.common.network.Selector.poll(Selector.java:303)
>   at kafka.network.Processor.poll(SocketServer.scala:494)
>   at kafka.network.Processor.run(SocketServer.scala:432)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: javax.security.sasl.SaslException: Authentication failed: 
> Credentials could not be obtained [Caused by 
> javax.security.sasl.SaslException: Authentication failed: Invalid user 
> credentials]
>   at 
> org.apache.kafka.common.security.scram.ScramSaslServer.evaluateResponse(ScramSaslServer.java:104)
>   at 
> org.apache.kafka.common.security.authenticator.SaslServerAuthenticator.authenticate(SaslServerAuthenticator.java:235)
>   ... 6 more
> Caused by: javax.security.sasl.SaslException: Authentication failed: Invalid 
> user credentials
>   at 
> org.apache.kafka.common.security.scram.ScramSaslServer.evaluateResponse(ScramSaslServer.java:94)
>   ... 7 more
> {code}
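
For reference, a sketch of how the missing SCRAM credentials are created (the 
ZooKeeper address, user name, and password below are placeholders):

{code}
bin/kafka-configs.sh --zookeeper localhost:2181 --alter \
    --add-config 'SCRAM-SHA-512=[password=secret]' \
    --entity-type users --entity-name alice
{code}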



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-5910) Kafka 0.11.0.1 Kafka consumer/producers retries in infinite loop when wrong SASL creds are passed

2017-09-22 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-5910.
--
Resolution: Duplicate

Resolving as duplicate of KAFKA-4764

> Kafka 0.11.0.1 Kafka consumer/producers retries in infinite loop when wrong 
> SASL creds are passed
> -
>
> Key: KAFKA-5910
> URL: https://issues.apache.org/jira/browse/KAFKA-5910
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.11.0.0
>Reporter: Ramkumar
>
> Similar to https://issues.apache.org/jira/browse/KAFKA-4764, but the status 
> shows patch available and the client won't disconnect after getting the 
> warning.
> Issue 1:
> Publisher flow:
> The Kafka publisher goes into an infinite loop if the AAF credentials are 
> wrong when authenticating with the Kafka broker.
> Detail:
> If the correct user name and password are used at the kafka publisher client 
> side to connect to the kafka broker, then it authenticates and authorizes fine.
> If an incorrect username or password is used at the kafka publisher client 
> side, then the broker log shows a continuous (infinite loop) of the client 
> trying to reconnect to the broker, as it doesn't get an authentication failure 
> exception from the broker.
> JIRA defect in apache:
> https://issues.apache.org/jira/browse/KAFKA-4764
> Can you please let me know if this issue is resolved in Kafka 0.11.0.1 or is 
> still an open issue?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (KAFKA-5917) Kafka not starting

2017-09-18 Thread Manikumar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16169714#comment-16169714
 ] 

Manikumar edited comment on KAFKA-5917 at 9/18/17 7:56 AM:
---

This happens when you run the kafka-configs.sh script with newer-version Kafka 
libs against an older version of the Kafka server. My guess is that you are 
running a newer version of KafkaManager against an older version of the Kafka 
cluster.

You need to manually remove the stale /config/changes/config_change_ znodes to 
restore Kafka.


was (Author: omkreddy):
This happens when you run Kafka-configs.sh script with new versions Kafka libs 
on an older version of Kafka server.  In my guess, you are mostly running newer 
versions of KafkaManager against older versions of Kafka Cluster.

 you need to manually remove /config/changes/config_change_ zk path to 
restore kafka.

> Kafka not starting
> --
>
> Key: KAFKA-5917
> URL: https://issues.apache.org/jira/browse/KAFKA-5917
> Project: Kafka
>  Issue Type: Bug
>  Components: admin
>Affects Versions: 0.10.0.0
>Reporter: Balu
>
> Getting this error in a kafka, zookeeper, schema repository cluster.
>  FATAL [Kafka Server 3], Fatal error during KafkaServer startup. Prepare to 
> shutdown (kafka.server.KafkaServer)
> java.lang.IllegalArgumentException: requirement failed
> at scala.Predef$.require(Predef.scala:212)
> at 
> kafka.server.DynamicConfigManager$ConfigChangedNotificationHandler$.processNotification(DynamicConfigManager.scala:90)
> at 
> kafka.common.ZkNodeChangeNotificationListener$$anonfun$kafka$common$ZkNodeChangeNotificationListener$$processNotifications$2$$anonfun$apply$2.apply(ZkNodeChangeNotificationListener.scala:95)
> at 
> kafka.common.ZkNodeChangeNotificationListener$$anonfun$kafka$common$ZkNodeChangeNotificationListener$$processNotifications$2$$anonfun$apply$2.apply(ZkNodeChangeNotificationListener.scala:95)
> at scala.Option.map(Option.scala:146)
> at 
> kafka.common.ZkNodeChangeNotificationListener$$anonfun$kafka$common$ZkNodeChangeNotificationListener$$processNotifications$2.apply(ZkNodeChangeNotificationListener.scala:95)
> at 
> kafka.common.ZkNodeChangeNotificationListener$$anonfun$kafka$common$ZkNodeChangeNotificationListener$$processNotifications$2.apply(ZkNodeChangeNotificationListener.scala:90)
> at 
> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
> at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
> at 
> kafka.common.ZkNodeChangeNotificationListener.kafka$common$ZkNodeChangeNotificationListener$$processNotifications(ZkNodeChangeNotificationListener.scala:90)
> at 
> kafka.common.ZkNodeChangeNotificationListener.processAllNotifications(ZkNodeChangeNotificationListener.scala:79)
> at 
> kafka.common.ZkNodeChangeNotificationListener.init(ZkNodeChangeNotificationListener.scala:67)
> at 
> kafka.server.DynamicConfigManager.startup(DynamicConfigManager.scala:122)
> at kafka.server.KafkaServer.startup(KafkaServer.scala:233)
> at 
> kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:37)
> at kafka.Kafka$.main(Kafka.scala:67)
> at kafka.Kafka.main(Kafka.scala)
> Please help



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-6042) Kafka Request Handler deadlocks and brings down the cluster.

2017-10-10 Thread Manikumar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16198453#comment-16198453
 ] 

Manikumar commented on KAFKA-6042:
--

This may be related to KAFKA-5970.

> Kafka Request Handler deadlocks and brings down the cluster.
> 
>
> Key: KAFKA-6042
> URL: https://issues.apache.org/jira/browse/KAFKA-6042
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.11.0.0, 0.11.0.1
> Environment: kafka version: 0.11.0.1
> client versions: 0.8.2.1-0.10.2.1
> platform: aws (eu-west-1a)
> nodes: 36 x r4.xlarge
> disk storage: 2.5 tb per node (~73% usage per node)
> topics: 250
> number of partitions: 48k (approx)
> os: ubuntu 14.04
> jvm: Java(TM) SE Runtime Environment (build 1.8.0_131-b11), Java HotSpot(TM) 
> 64-Bit Server VM (build 25.131-b11, mixed mode)
>Reporter: Ben Corlett
>Priority: Critical
> Attachments: thread_dump.txt.gz
>
>
> We have been experiencing a deadlock that happens on a consistent server 
> within our cluster. This happens multiple times a week currently. It first 
> started happening when we upgraded to 0.11.0.0. Sadly 0.11.0.1 failed to 
> resolve the issue.
> Sequence of events:
> At a seemingly random time broker 125 goes into a deadlock. As soon as it is 
> deadlocked it will remove all the ISRs for any partition it is the leader for.
> [2017-10-10 00:06:10,061] INFO Partition [XX,24] on broker 125: 
> Shrinking ISR from 117,125 to 125 (kafka.cluster.Partition)
> [2017-10-10 00:06:10,073] INFO Partition [XX,974] on broker 125: 
> Shrinking ISR from 117,125 to 125 (kafka.cluster.Partition)
> [2017-10-10 00:06:10,079] INFO Partition [XX,64] on broker 125: 
> Shrinking ISR from 117,125 to 125 (kafka.cluster.Partition)
> [2017-10-10 00:06:10,081] INFO Partition [XX,21] on broker 125: 
> Shrinking ISR from 117,125 to 125 (kafka.cluster.Partition)
> [2017-10-10 00:06:10,084] INFO Partition [XX,12] on broker 125: 
> Shrinking ISR from 117,125 to 125 (kafka.cluster.Partition)
> [2017-10-10 00:06:10,085] INFO Partition [XX,61] on broker 125: 
> Shrinking ISR from 117,125 to 125 (kafka.cluster.Partition)
> [2017-10-10 00:06:10,086] INFO Partition [XX,53] on broker 125: 
> Shrinking ISR from 117,125 to 125 (kafka.cluster.Partition)
> [2017-10-10 00:06:10,088] INFO Partition [XX,27] on broker 125: 
> Shrinking ISR from 117,125 to 125 (kafka.cluster.Partition)
> [2017-10-10 00:06:10,090] INFO Partition [XX,182] on broker 125: 
> Shrinking ISR from 117,125 to 125 (kafka.cluster.Partition)
> [2017-10-10 00:06:10,091] INFO Partition [XX,16] on broker 125: 
> Shrinking ISR from 117,125 to 125 (kafka.cluster.Partition)
> 
> The other nodes fail to connect to the node 125 
> [2017-10-10 00:08:42,318] WARN [ReplicaFetcherThread-0-125]: Error in fetch 
> to broker 125, request (type=FetchRequest, replicaId=101, maxWait=500, 
> minBytes=1, maxBytes=10485760, fetchData={XX-94=(offset=0, 
> logStartOffset=0, maxBytes=1048576), XX-22=(offset=0, 
> logStartOffset=0, maxBytes=1048576), XX-58=(offset=0, 
> logStartOffset=0, maxBytes=1048576), XX-11=(offset=78932482, 
> logStartOffset=50881481, maxBytes=1048576), XX-55=(offset=0, 
> logStartOffset=0, maxBytes=1048576), XX-19=(offset=0, 
> logStartOffset=0, maxBytes=1048576), XX-91=(offset=0, 
> logStartOffset=0, maxBytes=1048576), XX-5=(offset=903857106, 
> logStartOffset=0, maxBytes=1048576), XX-80=(offset=0, 
> logStartOffset=0, maxBytes=1048576), XX-88=(offset=0, 
> logStartOffset=0, maxBytes=1048576), XX-34=(offset=308, 
> logStartOffset=308, maxBytes=1048576), XX-7=(offset=369990, 
> logStartOffset=369990, maxBytes=1048576), XX-0=(offset=57965795, 
> logStartOffset=0, maxBytes=1048576)}) (kafka.server.ReplicaFetcherThread)
> java.io.IOException: Connection to 125 was disconnected before the response 
> was read
> at 
> org.apache.kafka.clients.NetworkClientUtils.sendAndReceive(NetworkClientUtils.java:93)
> at 
> kafka.server.ReplicaFetcherBlockingSend.sendRequest(ReplicaFetcherBlockingSend.scala:93)
> at 
> kafka.server.ReplicaFetcherThread.fetch(ReplicaFetcherThread.scala:207)
> at 
> kafka.server.ReplicaFetcherThread.fetch(ReplicaFetcherThread.scala:42)
> at 
> kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:151)
> at 
> kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:112)
> at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:64)
> As node 125 removed all the ISRs as it was locking up, a failover for any 
> partition without an unclean leader election is not possible. This breaks any 
> partition 

[jira] [Commented] (KAFKA-6022) mirror maker stores offset in zookeeper

2017-10-10 Thread Manikumar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16198416#comment-16198416
 ] 

Manikumar commented on KAFKA-6022:
--

Since 0.10.1.0, there is no need to pass the --new-consumer option. You just 
need to set the broker connect string (bootstrap.servers) in the 
consumer.config file.
Check here: http://kafka.apache.org/documentation.html#upgrade_1010_notable
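
A minimal sketch (broker addresses, group id, and topic pattern are placeholders):

{code}
# consumer.config -- with bootstrap.servers set, MirrorMaker uses the new
# consumer and commits offsets to Kafka instead of ZooKeeper
bootstrap.servers=src-broker1:9092,src-broker2:9092
group.id=MirrorMaker_group
{code}

and then start MirrorMaker with:

{code}
bin/kafka-mirror-maker.sh --consumer.config consumer.config \
    --producer.config producer.config --whitelist 'mytopic.*'
{code}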

Check for the question below in the FAQ section:
How do we migrate to committing offsets to Kafka (rather than Zookeeper) in 
0.8.2?
https://cwiki.apache.org/confluence/display/KAFKA/FAQ

Yes, if you have the option, it is better to mirror the cluster from scratch 
using the new consumer.

Please close the JIRA if this solves your issue.



> mirror maker stores offset in zookeeper
> ---
>
> Key: KAFKA-6022
> URL: https://issues.apache.org/jira/browse/KAFKA-6022
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ronald van de Kuil
>Priority: Minor
>
> I happened to notice that the mirror maker stores its offset in zookeeper. 
> I do not see it using:
> bin/kafka-consumer-groups.sh --bootstrap-server pi1:9092 --new-consumer --list
> I do however see consumers that store their offset in kafka.
> I would guess that storing the offset in zookeeper is old style?
> Would it be an idea to update the mirror maker to the new consumer style?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-6042) Kafka Request Handler deadlocks and brings down the cluster.

2017-10-10 Thread Manikumar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16198497#comment-16198497
 ] 

Manikumar commented on KAFKA-6042:
--

It would be great if you could apply the KAFKA-5970 fix and validate it in your 
cluster.

The Kafka community has not yet started discussing a 0.11.0.2 release.




> Kafka Request Handler deadlocks and brings down the cluster.
> 
>
> Key: KAFKA-6042
> URL: https://issues.apache.org/jira/browse/KAFKA-6042
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.11.0.0, 0.11.0.1
> Environment: kafka version: 0.11.0.1
> client versions: 0.8.2.1-0.10.2.1
> platform: aws (eu-west-1a)
> nodes: 36 x r4.xlarge
> disk storage: 2.5 tb per node (~73% usage per node)
> topics: 250
> number of partitions: 48k (approx)
> os: ubuntu 14.04
> jvm: Java(TM) SE Runtime Environment (build 1.8.0_131-b11), Java HotSpot(TM) 
> 64-Bit Server VM (build 25.131-b11, mixed mode)
>Reporter: Ben Corlett
>Priority: Critical
> Attachments: thread_dump.txt.gz
>
>
> We have been experiencing a deadlock that happens on a consistent server 
> within our cluster. This happens multiple times a week currently. It first 
> started happening when we upgraded to 0.11.0.0. Sadly 0.11.0.1 failed to 
> resolve the issue.
> Sequence of events:
> At a seemingly random time broker 125 goes into a deadlock. As soon as it is 
> deadlocked it will remove all the ISRs for any partition it is the leader for.
> [2017-10-10 00:06:10,061] INFO Partition [XX,24] on broker 125: 
> Shrinking ISR from 117,125 to 125 (kafka.cluster.Partition)
> [2017-10-10 00:06:10,073] INFO Partition [XX,974] on broker 125: 
> Shrinking ISR from 117,125 to 125 (kafka.cluster.Partition)
> [2017-10-10 00:06:10,079] INFO Partition [XX,64] on broker 125: 
> Shrinking ISR from 117,125 to 125 (kafka.cluster.Partition)
> [2017-10-10 00:06:10,081] INFO Partition [XX,21] on broker 125: 
> Shrinking ISR from 117,125 to 125 (kafka.cluster.Partition)
> [2017-10-10 00:06:10,084] INFO Partition [XX,12] on broker 125: 
> Shrinking ISR from 117,125 to 125 (kafka.cluster.Partition)
> [2017-10-10 00:06:10,085] INFO Partition [XX,61] on broker 125: 
> Shrinking ISR from 117,125 to 125 (kafka.cluster.Partition)
> [2017-10-10 00:06:10,086] INFO Partition [XX,53] on broker 125: 
> Shrinking ISR from 117,125 to 125 (kafka.cluster.Partition)
> [2017-10-10 00:06:10,088] INFO Partition [XX,27] on broker 125: 
> Shrinking ISR from 117,125 to 125 (kafka.cluster.Partition)
> [2017-10-10 00:06:10,090] INFO Partition [XX,182] on broker 125: 
> Shrinking ISR from 117,125 to 125 (kafka.cluster.Partition)
> [2017-10-10 00:06:10,091] INFO Partition [XX,16] on broker 125: 
> Shrinking ISR from 117,125 to 125 (kafka.cluster.Partition)
> 
> The other nodes fail to connect to the node 125 
> [2017-10-10 00:08:42,318] WARN [ReplicaFetcherThread-0-125]: Error in fetch 
> to broker 125, request (type=FetchRequest, replicaId=101, maxWait=500, 
> minBytes=1, maxBytes=10485760, fetchData={XX-94=(offset=0, 
> logStartOffset=0, maxBytes=1048576), XX-22=(offset=0, 
> logStartOffset=0, maxBytes=1048576), XX-58=(offset=0, 
> logStartOffset=0, maxBytes=1048576), XX-11=(offset=78932482, 
> logStartOffset=50881481, maxBytes=1048576), XX-55=(offset=0, 
> logStartOffset=0, maxBytes=1048576), XX-19=(offset=0, 
> logStartOffset=0, maxBytes=1048576), XX-91=(offset=0, 
> logStartOffset=0, maxBytes=1048576), XX-5=(offset=903857106, 
> logStartOffset=0, maxBytes=1048576), XX-80=(offset=0, 
> logStartOffset=0, maxBytes=1048576), XX-88=(offset=0, 
> logStartOffset=0, maxBytes=1048576), XX-34=(offset=308, 
> logStartOffset=308, maxBytes=1048576), XX-7=(offset=369990, 
> logStartOffset=369990, maxBytes=1048576), XX-0=(offset=57965795, 
> logStartOffset=0, maxBytes=1048576)}) (kafka.server.ReplicaFetcherThread)
> java.io.IOException: Connection to 125 was disconnected before the response 
> was read
> at 
> org.apache.kafka.clients.NetworkClientUtils.sendAndReceive(NetworkClientUtils.java:93)
> at 
> kafka.server.ReplicaFetcherBlockingSend.sendRequest(ReplicaFetcherBlockingSend.scala:93)
> at 
> kafka.server.ReplicaFetcherThread.fetch(ReplicaFetcherThread.scala:207)
> at 
> kafka.server.ReplicaFetcherThread.fetch(ReplicaFetcherThread.scala:42)
> at 
> kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:151)
> at 
> kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:112)
> at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:64)
> As node 125 removed all the ISRs as it was locking 

[jira] [Resolved] (KAFKA-4504) Details of retention.bytes property at Topic level are not clear on how they impact partition size

2017-10-17 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-4504.
--
   Resolution: Fixed
 Assignee: Manikumar
Fix Version/s: 1.0.0

> Details of retention.bytes property at Topic level are not clear on how they 
> impact partition size
> --
>
> Key: KAFKA-4504
> URL: https://issues.apache.org/jira/browse/KAFKA-4504
> Project: Kafka
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 0.10.0.1
>Reporter: Justin Manchester
>Assignee: Manikumar
> Fix For: 1.0.0
>
>
> Problem:
> Details of retention.bytes property at Topic level are not clear on how they 
> impact partition size
> Business Impact:
> Users are setting retention.bytes and not seeing the desired store amount of 
> data.
> Current Text:
> This configuration controls the maximum size a log can grow to before we will 
> discard old log segments to free up space if we are using the "delete" 
> retention policy. By default there is no size limit only a time limit.
> Proposed change:
> This configuration controls the maximum size a log can grow to before we will 
> discard old log segments to free up space if we are using the "delete" 
> retention policy. By default there is no size limit only a time limit.  
> Please note, this is calculated as retention.bytes * number of partitions on 
> the given topic for the total  amount of disk space to be used.  
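
A worked example with illustrative numbers (not from this ticket):

{code}
retention.bytes = 1073741824  (1 GiB, applied per partition)
partitions      = 12
=> up to ~12 x 1 GiB = 12 GiB of log data retained for the topic, per replica
{code}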



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5855) records-lag is always zero

2017-09-08 Thread Manikumar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16158559#comment-16158559
 ] 

Manikumar commented on KAFKA-5855:
--

I am able to generate some lag. By default, the consumer reads from the latest 
offset if there is no previously saved offset for that group. You can produce 
some 10K records and try to consume from the beginning; this should show some 
lag.
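
For example (topic name and broker address are placeholders):

{code}
# produce ~10K records
bin/kafka-producer-perf-test.sh --topic lag-test --num-records 10000 \
    --record-size 100 --throughput -1 \
    --producer-props bootstrap.servers=localhost:9092

# consume from the beginning; records-lag stays non-zero until the
# consumer catches up
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 \
    --topic lag-test --from-beginning
{code}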


> records-lag is always zero
> --
>
> Key: KAFKA-5855
> URL: https://issues.apache.org/jira/browse/KAFKA-5855
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer, metrics
>Affects Versions: 0.10.1.0, 0.11.0.0
>Reporter: Mohsen Zainalpour
>Priority: Minor
>
> I use Prometheus JmxExporter to export Kafka consumer metrics but the value 
> of records-lag is always zero.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-4638) Outdated and misleading link in section "End-to-end Batch Compression"

2017-09-08 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-4638.
--
Resolution: Duplicate

Resolving as a duplicate of KAFKA-5686; a PR has been raised for KAFKA-5686.

> Outdated and misleading link in section "End-to-end Batch Compression"
> --
>
> Key: KAFKA-4638
> URL: https://issues.apache.org/jira/browse/KAFKA-4638
> Project: Kafka
>  Issue Type: Bug
>  Components: compression
>Affects Versions: 0.10.1.1
> Environment: Documentation
>Reporter: Vincent Tieleman
>Priority: Minor
>
> The section "End-to-end Batch Compression" mentions the following:
> "Kafka supports GZIP, Snappy and LZ4 compression protocols. More details on 
> compression can be found here."
> When looking up the "here" link, 
> https://cwiki.apache.org/confluence/display/KAFKA/Compression, it describes 
> not only the mechanism, but also gives the configuration properties to use. 
> The properties provided are "compression.codec" and "compression.topics", but 
> these are no longer used. Instead, one should use "compression.type", where 
> one can directly specify which compression to use (i.e. currently gzip, 
> snappy or lz4). The link also fails to mention lz4 compression, which is 
> supported by now.
> Unaware that this information was outdated, I spent quite some time before 
> finding out that I was using the wrong properties. I imagine this must happen 
> to more people.
> I suggest adding a small remark to the link, so that people know it is 
> outdated. Furthermore, I would add that the latest configuration properties 
> can be found in the "Producer config" section.
> The alternative would be to simply remove the link all together, as there is 
> too much outdated information in it.
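
For reference, the replacement producer property mentioned above looks like 
this (the value is only an example):

{code}
compression.type=lz4
{code}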



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-5592) Connection with plain client to SSL-secured broker causes OOM

2017-09-08 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-5592.
--
Resolution: Duplicate

Marking this as duplicate of KAFKA-4493.

> Connection with plain client to SSL-secured broker causes OOM
> -
>
> Key: KAFKA-5592
> URL: https://issues.apache.org/jira/browse/KAFKA-5592
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.11.0.0
> Environment: Linux x86_64 x86_64 x86_64 GNU/Linux
>Reporter: Marcin Łuczyński
> Attachments: heapdump.20170713.100129.14207.0002.phd, Heap.PNG, 
> javacore.20170713.100129.14207.0003.txt, Snap.20170713.100129.14207.0004.trc, 
> Stack.PNG
>
>
> While testing a connection from a client app without a configured truststore 
> against a Kafka broker secured by SSL, my JVM crashes with an 
> OutOfMemoryError. I saw it mixed with StackOverflowError. I attach dump files.
> The stack trace to start with is here:
> {quote}at java/nio/HeapByteBuffer. (HeapByteBuffer.java:57) 
> at java/nio/ByteBuffer.allocate(ByteBuffer.java:331) 
> at 
> org/apache/kafka/common/network/NetworkReceive.readFromReadableChannel(NetworkReceive.java:93)
>  
> at 
> org/apache/kafka/common/network/NetworkReceive.readFrom(NetworkReceive.java:71)
>  
> at 
> org/apache/kafka/common/network/KafkaChannel.receive(KafkaChannel.java:169) 
> at org/apache/kafka/common/network/KafkaChannel.read(KafkaChannel.java:150) 
> at 
> org/apache/kafka/common/network/Selector.pollSelectionKeys(Selector.java:355) 
> at org/apache/kafka/common/network/Selector.poll(Selector.java:303) 
> at org/apache/kafka/clients/NetworkClient.poll(NetworkClient.java:349) 
> at 
> org/apache/kafka/clients/consumer/internals/ConsumerNetworkClient.poll(ConsumerNetworkClient.java:226)
>  
> at 
> org/apache/kafka/clients/consumer/internals/ConsumerNetworkClient.poll(ConsumerNetworkClient.java:188)
>  
> at 
> org/apache/kafka/clients/consumer/internals/AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:207)
>  
> at 
> org/apache/kafka/clients/consumer/internals/AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:193)
>  
> at 
> org/apache/kafka/clients/consumer/internals/ConsumerCoordinator.poll(ConsumerCoordinator.java:279)
>  
> at 
> org/apache/kafka/clients/consumer/KafkaConsumer.pollOnce(KafkaConsumer.java:1029)
>  
> at 
> org/apache/kafka/clients/consumer/KafkaConsumer.poll(KafkaConsumer.java:995) 
> at 
> com/ibm/is/cc/kafka/runtime/KafkaConsumerProcessor.process(KafkaConsumerProcessor.java:237)
>  
> at 
> com/ibm/is/cc/kafka/runtime/KafkaProcessor.process(KafkaProcessor.java:173) 
> at 
> com/ibm/is/cc/javastage/connector/CC_JavaAdapter.run(CC_JavaAdapter.java:443){quote}
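
For completeness, a client talking to an SSL listener needs SSL enabled on its 
side; a minimal client config sketch (paths and password are placeholders):

{code}
security.protocol=SSL
ssl.truststore.location=/path/to/client.truststore.jks
ssl.truststore.password=changeit
{code}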



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5850) Py4JJavaError: An error occurred while calling o40.loadClass. : java.lang.ClassNotFoundException: org.apache.spark.streaming.kafka.KafkaUtilsPythonHelper

2017-09-08 Thread Manikumar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16158405#comment-16158405
 ] 

Manikumar commented on KAFKA-5850:
--

[~saurabhbidwai] This issue is not related to Kafka. You may want to raise it 
on the Spark project.

> Py4JJavaError: An error occurred while calling o40.loadClass. : 
> java.lang.ClassNotFoundException: 
> org.apache.spark.streaming.kafka.KafkaUtilsPythonHelper
> -
>
> Key: KAFKA-5850
> URL: https://issues.apache.org/jira/browse/KAFKA-5850
> Project: Kafka
>  Issue Type: Bug
>Reporter: Saurabh Bidwai
>
> ---
> Py4JJavaError Traceback (most recent call last)
>  in ()
> > 1 streamer(sc)
>  in streamer(sc)
>   5 pwords = 
> load_wordlist("/home/daasuser/twitter/kafkatweets/Twitter-Sentiment-Analysis-Using-Spark-Streaming-And-Kafka/Dataset/positive.txt")
>   6 nwords = 
> load_wordlist("/home/daasuser/twitter/kafkatweets/Twitter-Sentiment-Analysis-Using-Spark-Streaming-And-Kafka/Dataset/negative.txt")
> > 7 counts = stream(ssc, pwords, nwords, 600)
>   8 make_plot(counts)
>  in stream(ssc, pwords, nwords, duration)
>   1 def stream(ssc, pwords, nwords, duration):
> > 2 kstream = KafkaUtils.createDirectStream(ssc, topics = 
> ['twitterstream'], kafkaParams = {"metadata.broker.list": 
> ["dn1001:6667","dn2001:6667","dn3001:6667","dn4001:6667"]})
>   3 tweets = kstream.map(lambda x: x[1].encode("utf-8", "ignore"))
>   4 
>   5 # Each element of tweets will be the text of a tweet.
> /usr/hdp/current/spark-client/python/lib/pyspark.zip/pyspark/streaming/kafka.py
>  in createDirectStream(ssc, topics, kafkaParams, fromOffsets, keyDecoder, 
> valueDecoder, messageHandler)
> 150 if 'ClassNotFoundException' in str(e.java_exception):
> 151 KafkaUtils._printErrorMsg(ssc.sparkContext)
> --> 152 raise e
> 153 
> 154 stream = DStream(jstream, ssc, ser).map(func)
> Py4JJavaError: An error occurred while calling o40.loadClass.
> : java.lang.ClassNotFoundException: 
> org.apache.spark.streaming.kafka.KafkaUtilsPythonHelper
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231)
>   at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:381)
>   at py4j.Gateway.invoke(Gateway.java:259)
>   at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
>   at py4j.commands.CallCommand.execute(CallCommand.java:79)
>   at py4j.GatewayConnection.run(GatewayConnection.java:209)
>   at java.lang.Thread.run(Thread.java:745)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-251) The ConsumerStats MBean's PartOwnerStats attribute is a string

2017-09-08 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-251.
-
Resolution: Won't Fix

Closing inactive issue as per above comments.

> The ConsumerStats MBean's PartOwnerStats  attribute is a string
> ---
>
> Key: KAFKA-251
> URL: https://issues.apache.org/jira/browse/KAFKA-251
> Project: Kafka
>  Issue Type: Bug
>Reporter: Pierre-Yves Ritschard
> Attachments: 0001-Incorporate-Jun-Rao-s-comments-on-KAFKA-251.patch, 
> 0001-Provide-a-patch-for-KAFKA-251.patch
>
>
> The fact that the PartOwnerStats is a string prevents monitoring systems from 
> graphing consumer lag. There should be one mbean per [ topic, partition, 
> groupid ] group.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-1608) Windows: Error: Could not find or load main class org.apache.zookeeper.server.quorum.QuorumPeerMain

2017-09-08 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-1608.
--
Resolution: Fixed

This was fixed in newer versions. Please reopen if you think the issue still exists.


> Windows: Error: Could not find or load main class 
> org.apache.zookeeper.server.quorum.QuorumPeerMain
> ---
>
> Key: KAFKA-1608
> URL: https://issues.apache.org/jira/browse/KAFKA-1608
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.1.1
> Environment: Windows
>Reporter: Rakesh Komulwad
>Priority: Minor
>  Labels: windows
>
> When trying to start zookeeper getting the following error in Windows
> Error: Could not find or load main class 
> org.apache.zookeeper.server.quorum.QuorumPeerMain
> Fix for this is to edit windows\kafka-run-class.bat
> Change
> set BASE_DIR=%CD%\..
> to
> set BASE_DIR=%CD%\..\..
> Change
> for %%i in (%BASE_DIR%\core\lib\*.jar)
> to
> for %%i in (%BASE_DIR%\libs\*.jar)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-1817) AdminUtils.createTopic vs kafka-topics.sh --create with partitions

2017-09-08 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-1817.
--
Resolution: Fixed

Fixed in  KAFKA-1737

> AdminUtils.createTopic vs kafka-topics.sh --create with partitions
> --
>
> Key: KAFKA-1817
> URL: https://issues.apache.org/jira/browse/KAFKA-1817
> Project: Kafka
>  Issue Type: Sub-task
>Affects Versions: 0.8.2.0
> Environment: debian linux current version  up to date
>Reporter: Jason Kania
>
> When topics are created using AdminUtils.createTopic in code, no partitions 
> folder is created. The zookeeper shell shows this.
> ls /brokers/topics/foshizzle
> []
> However, when kafka-topics.sh --create is run, the partitions folder is 
> created:
> ls /brokers/topics/foshizzle
> [partitions]
> The unfortunately useless error message "KeeperErrorCode = NoNode for 
> /brokers/topics/periodicReading/partitions" makes it unclear what to do. When 
> the topics are listed via kafka-topics.sh, they appear to have been created 
> fine. It would be good if the exception was wrapped by Kafka to suggest 
> looking in the zookeeper shell so a person didn't have to dig around to 
> understand what this path means...



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-2319) After controlled shutdown: IllegalStateException: Kafka scheduler has not been started

2017-09-08 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-2319.
--
Resolution: Fixed

This was fixed in newer versions. Please reopen if you think the issue still exists.


> After controlled shutdown: IllegalStateException: Kafka scheduler has not 
> been started
> --
>
> Key: KAFKA-2319
> URL: https://issues.apache.org/jira/browse/KAFKA-2319
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Rosenberg
>
> Running 0.8.2.1, just saw this today at the end of a controlled shutdown.  It 
> doesn't happen every time, but I've seen it several times:
> {code}
> 2015-07-07 18:54:28,424  INFO [Thread-4] server.KafkaServer - [Kafka Server 
> 99], Controlled shutdown succeeded
> 2015-07-07 18:54:28,425  INFO [Thread-4] network.SocketServer - [Socket 
> Server on Broker 99], Shutting down
> 2015-07-07 18:54:28,435  INFO [Thread-4] network.SocketServer - [Socket 
> Server on Broker 99], Shutdown completed
> 2015-07-07 18:54:28,435  INFO [Thread-4] server.KafkaRequestHandlerPool - 
> [Kafka Request Handler on Broker 99], shutting down
> 2015-07-07 18:54:28,444  INFO [Thread-4] server.KafkaRequestHandlerPool - 
> [Kafka Request Handler on Broker 99], shut down completely
> 2015-07-07 18:54:28,649  INFO [Thread-4] server.ReplicaManager - [Replica 
> Manager on Broker 99]: Shut down
> 2015-07-07 18:54:28,649  INFO [Thread-4] server.ReplicaFetcherManager - 
> [ReplicaFetcherManager on broker 99] shutting down
> 2015-07-07 18:54:28,650  INFO [Thread-4] server.ReplicaFetcherThread - 
> [ReplicaFetcherThread-0-95], Shutting down
> 2015-07-07 18:54:28,750  INFO [Thread-4] server.ReplicaFetcherThread - 
> [ReplicaFetcherThread-0-95], Shutdown completed
> 2015-07-07 18:54:28,750  INFO [ReplicaFetcherThread-0-95] 
> server.ReplicaFetcherThread - [ReplicaFetcherThread-0-95], Stopped
> 2015-07-07 18:54:28,750  INFO [Thread-4] server.ReplicaFetcherThread - 
> [ReplicaFetcherThread-0-98], Shutting down
> 2015-07-07 18:54:28,791  INFO [Thread-4] server.ReplicaFetcherThread - 
> [ReplicaFetcherThread-0-98], Shutdown completed
> 2015-07-07 18:54:28,791  INFO [ReplicaFetcherThread-0-98] 
> server.ReplicaFetcherThread - [ReplicaFetcherThread-0-98], Stopped
> 2015-07-07 18:54:28,791  INFO [Thread-4] server.ReplicaFetcherManager - 
> [ReplicaFetcherManager on broker 99] shutdown completed
> 2015-07-07 18:54:28,819  INFO [Thread-4] server.ReplicaManager - [Replica 
> Manager on Broker 99]: Shut down completely
> 2015-07-07 18:54:28,826  INFO [Thread-4] log.LogManager - Shutting down.
> 2015-07-07 18:54:30,459  INFO [Thread-4] log.LogManager - Shutdown complete.
> 2015-07-07 18:54:30,463  WARN [Thread-4] utils.Utils$ - Kafka scheduler has 
> not been started
> java.lang.IllegalStateException: Kafka scheduler has not been started
> at kafka.utils.KafkaScheduler.ensureStarted(KafkaScheduler.scala:114)
> at kafka.utils.KafkaScheduler.shutdown(KafkaScheduler.scala:86)
> at 
> kafka.controller.KafkaController.onControllerResignation(KafkaController.scala:350)
> at 
> kafka.controller.KafkaController.shutdown(KafkaController.scala:664)
> at 
> kafka.server.KafkaServer$$anonfun$shutdown$8.apply$mcV$sp(KafkaServer.scala:285)
> at kafka.utils.Utils$.swallow(Utils.scala:172)
> at kafka.utils.Logging$class.swallowWarn(Logging.scala:92)
> at kafka.utils.Utils$.swallowWarn(Utils.scala:45)
> at kafka.utils.Logging$class.swallow(Logging.scala:94)
> at kafka.utils.Utils$.swallow(Utils.scala:45)
> at kafka.server.KafkaServer.shutdown(KafkaServer.scala:285)
>  ...
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (KAFKA-4391) On Windows, Kafka server stops with uncaught exception after coming back from sleep

2017-08-29 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar updated KAFKA-4391:
-
Labels: windows  (was: )

> On Windows, Kafka server stops with uncaught exception after coming back from 
> sleep
> ---
>
> Key: KAFKA-4391
> URL: https://issues.apache.org/jira/browse/KAFKA-4391
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.1
> Environment: Windows 10, jdk1.8.0_111
>Reporter: Yiquan Zhou
>  Labels: windows
>
> Steps to reproduce:
> 1. start the zookeeper
> $ bin\windows\zookeeper-server-start.bat config/zookeeper.properties
> 2. start the Kafka server with the default properties
> $ bin\windows\kafka-server-start.bat config/server.properties
> 3. put Windows into sleep mode for 1-2 hours
> 4. activate Windows again, an exception occurs in Kafka server console and 
> the server is stopped:
> {code:title=kafka console log}
> [2016-11-08 21:45:35,185] INFO Client session timed out, have not heard from 
> server in 10081379ms for sessionid 0x1584514da47, closing socket 
> connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
> [2016-11-08 21:45:40,698] INFO zookeeper state changed (Disconnected) 
> (org.I0Itec.zkclient.ZkClient)
> [2016-11-08 21:45:43,029] INFO Opening socket connection to server 
> 127.0.0.1/127.0.0.1:2181. Will not attempt to authenticate using SASL 
> (unknown error) (org.apache.zookeeper.ClientCnxn)
> [2016-11-08 21:45:43,044] INFO Socket connection established to 
> 127.0.0.1/127.0.0.1:2181, initiating session (org.apache.zookeeper.ClientCnxn)
> [2016-11-08 21:45:43,158] INFO Unable to reconnect to ZooKeeper service, 
> session 0x1584514da47 has expired, closing socket connection 
> (org.apache.zookeeper.ClientCnxn)
> [2016-11-08 21:45:43,158] INFO zookeeper state changed (Expired) 
> (org.I0Itec.zkclient.ZkClient)
> [2016-11-08 21:45:43,236] INFO Initiating client connection, 
> connectString=localhost:2181 sessionTimeout=6000 
> watcher=org.I0Itec.zkclient.ZkClient@11ca437b (org.apache.zookeeper.ZooKeeper)
> [2016-11-08 21:45:43,280] INFO EventThread shut down 
> (org.apache.zookeeper.ClientCnxn)
> log4j:ERROR Failed to rename [/controller.log] to 
> [/controller.log.2016-11-08-18].
> [2016-11-08 21:45:43,421] INFO Opening socket connection to server 
> 127.0.0.1/127.0.0.1:2181. Will not attempt to authenticate using SASL 
> (unknown error) (org.apache.zookeeper.ClientCnxn)
> [2016-11-08 21:45:43,483] INFO Socket connection established to 
> 127.0.0.1/127.0.0.1:2181, initiating session (org.apache.zookeeper.ClientCnxn)
> [2016-11-08 21:45:43,811] INFO Session establishment complete on server 
> 127.0.0.1/127.0.0.1:2181, sessionid = 0x1584514da470001, negotiated timeout = 
> 6000 (org.apache.zookeeper.ClientCnxn)
> [2016-11-08 21:45:43,827] INFO zookeeper state changed (SyncConnected) 
> (org.I0Itec.zkclient.ZkClient)
> log4j:ERROR Failed to rename [/server.log] to [/server.log.2016-11-08-18].
> [2016-11-08 21:45:43,827] INFO Creating /controller (is it secure? false) 
> (kafka.utils.ZKCheckedEphemeral)
> [2016-11-08 21:45:44,014] INFO Result of znode creation is: OK 
> (kafka.utils.ZKCheckedEphemeral)
> [2016-11-08 21:45:44,014] INFO 0 successfully elected as leader 
> (kafka.server.ZookeeperLeaderElector)
> log4j:ERROR Failed to rename [/state-change.log] to 
> [/state-change.log.2016-11-08-18].
> [2016-11-08 21:45:44,421] INFO re-registering broker info in ZK for broker 0 
> (kafka.server.KafkaHealthcheck)
> [2016-11-08 21:45:44,436] INFO Creating /brokers/ids/0 (is it secure? false) 
> (kafka.utils.ZKCheckedEphemeral)
> [2016-11-08 21:45:44,686] INFO Result of znode creation is: OK 
> (kafka.utils.ZKCheckedEphemeral)
> [2016-11-08 21:45:44,686] INFO Registered broker 0 at path /brokers/ids/0 
> with addresses: PLAINTEXT -> EndPoint(192.168.0.15,9092,PLAINTEXT) 
> (kafka.utils.ZkUtils)
> [2016-11-08 21:45:44,686] INFO done re-registering broker 
> (kafka.server.KafkaHealthcheck)
> [2016-11-08 21:45:44,686] INFO Subscribing to /brokers/topics path to watch 
> for new topics (kafka.server.KafkaHealthcheck)
> [2016-11-08 21:45:45,046] INFO [ReplicaFetcherManager on broker 0] Removed 
> fetcher for partitions [test,0] (kafka.server.ReplicaFetcherManager)
> [2016-11-08 21:45:45,061] INFO New leader is 0 
> (kafka.server.ZookeeperLeaderElector$LeaderChangeListener)
> [2016-11-08 21:45:47,325] ERROR Uncaught exception in scheduled task 
> 'kafka-recovery-point-checkpoint' (kafka.utils.KafkaScheduler)
> java.io.IOException: File rename from 
> D:\tmp\kafka-logs\recovery-point-offset-checkpoint.tmp to 
> D:\tmp\kafka-logs\recovery-point-offset-checkpoint failed.
> at kafka.server.OffsetCheckpoint.write(OffsetCheckpoint.scala:66)
> at 
> 

[jira] [Resolved] (KAFKA-2173) Kafka died after throw more error

2017-08-29 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-2173.
--
Resolution: Cannot Reproduce

This might have been fixed in recent versions. Please reopen if you think the 
issue still exists.


> Kafka died after throw more error
> -
>
> Key: KAFKA-2173
> URL: https://issues.apache.org/jira/browse/KAFKA-2173
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.2.1
> Environment: VPS Server CentOs 6.6 4G Ram
>Reporter: Truyet Nguyen
>
> Kafka died after server.log repeatedly threw this error:
> [2015-05-05 16:08:34,616] ERROR Closing socket for /127.0.0.1 because of 
> error (kafka.network.Processor)
> java.io.IOException: Broken pipe
>   at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
>   at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
>   at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
>   at sun.nio.ch.IOUtil.write(IOUtil.java:65)
>   at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:470)
>   at kafka.api.TopicDataSend.writeTo(FetchResponse.scala:123)
>   at kafka.network.MultiSend.writeTo(Transmission.scala:101)
>   at kafka.api.FetchResponseSend.writeTo(FetchResponse.scala:231)
>   at kafka.network.Processor.write(SocketServer.scala:472)
>   at kafka.network.Processor.run(SocketServer.scala:342)
>   at java.lang.Thread.run(Thread.java:745)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-1636) High CPU in very active environment

2017-08-29 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-1636.
--
Resolution: Won't Fix

ConsumerIterator blocks waiting for data from the underlying stream, so 
consumer threads parked in await are expected when there is nothing to fetch. 
Please reopen if you think the issue still exists.
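
If blocking indefinitely is undesirable, the old consumer can be configured to 
time out instead (the value is only an example):

{code}
# throws ConsumerTimeoutException instead of blocking forever when no
# message is available; the default of -1 blocks indefinitely
consumer.timeout.ms=30000
{code}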


> High CPU in very active environment
> ---
>
> Key: KAFKA-1636
> URL: https://issues.apache.org/jira/browse/KAFKA-1636
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.8.0
> Environment: Redhat 64 bit
>  2.6.32-431.23.3.el6.x86_64 #1 SMP Wed Jul 16 06:12:23 EDT 2014 x86_64 x86_64 
> x86_64 GNU/Linux
>Reporter: Laurie Turner
>Assignee: Neha Narkhede
>
> Found the same issue on StackOverFlow below:
> http://stackoverflow.com/questions/22983435/kafka-consumer-threads-are-in-waiting-state-and-lag-is-getting-increased
> This is a very busy environment and the majority of the CPU time seems to be 
> spent in the await method.
> 3XMTHREADINFO3   Java callstack:
> 4XESTACKTRACEat sun/misc/Unsafe.park(Native Method)
> 4XESTACKTRACEat 
> java/util/concurrent/locks/LockSupport.parkNanos(LockSupport.java:237(Compiled
>  Code))
> 4XESTACKTRACEat 
> java/util/concurrent/locks/AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2093(Compiled
>  Code))
> 4XESTACKTRACEat 
> java/util/concurrent/LinkedBlockingQueue.poll(LinkedBlockingQueue.java:478(Compiled
>  Code))
> 4XESTACKTRACEat 
> kafka/consumer/ConsumerIterator.makeNext(ConsumerIterator.scala:65(Compiled 
> Code))
> 4XESTACKTRACEat 
> kafka/consumer/ConsumerIterator.makeNext(ConsumerIterator.scala:33(Compiled 
> Code))
> 4XESTACKTRACEat 
> kafka/utils/IteratorTemplate.maybeComputeNext(IteratorTemplate.scala:61(Compiled
>  Code))
> 4XESTACKTRACEat 
> kafka/utils/IteratorTemplate.hasNext(IteratorTemplate.scala:53(Compiled Code))



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-1632) No such method error on KafkaStream.head

2017-08-30 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-1632.
--
Resolution: Cannot Reproduce

This is most likely caused by a Kafka version mismatch between the client and 
the jars on the classpath. Please reopen if you think the issue still exists.


> No such method error on KafkaStream.head
> 
>
> Key: KAFKA-1632
> URL: https://issues.apache.org/jira/browse/KAFKA-1632
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.0
>Reporter: aarti gupta
>
> The following error is thrown (when I call KafkaStream.head(), as shown in 
> the code snippet below):
>  WARN -  java.lang.NoSuchMethodError: 
> kafka.consumer.KafkaStream.head()Lkafka/message/MessageAndMetadata;
> My use case is that I want to block on the receive() method, and when a 
> message is published on the topic, I 'return the head' of the queue to the 
> calling method, which processes it.
> I do not use partitioning and have a single stream.
> import com.google.common.collect.Maps;
> import x.x.x.Task;
> import kafka.consumer.ConsumerConfig;
> import kafka.consumer.KafkaStream;
> import kafka.javaapi.consumer.ConsumerConnector;
> import kafka.javaapi.consumer.ZookeeperConsumerConnector;
> import kafka.message.MessageAndMetadata;
> import org.codehaus.jettison.json.JSONException;
> import org.codehaus.jettison.json.JSONObject;
> import org.slf4j.Logger;
> import org.slf4j.LoggerFactory;
> import java.io.IOException;
> import java.util.List;
> import java.util.Map;
> import java.util.Properties;
> import java.util.concurrent.Callable;
> import java.util.concurrent.Executors;
> /**
>  * @author agupta
>  */
> public class KafkaConsumerDelegate implements ConsumerDelegate {
> private ConsumerConnector consumerConnector;
> private String topicName;
> private static Logger LOG = 
> LoggerFactory.getLogger(KafkaProducerDelegate.class.getName());
> private final Map<String, Integer> topicCount = Maps.newHashMap();
> private Map<String, List<KafkaStream<byte[], byte[]>>> messageStreams;
> private List<KafkaStream<byte[], byte[]>> kafkaStreams;
> @Override
> public Task receive(final boolean consumerConfirms) {
> try {
> LOG.info("Kafka consumer delegate listening on topic " + 
> getTopicName());
> kafkaStreams = messageStreams.get(getTopicName());
> final KafkaStream<byte[], byte[]> kafkaStream = kafkaStreams.get(0);
> return Executors.newSingleThreadExecutor().submit(new Callable<Task>() {
> @Override
> public Task call() throws Exception {
>  final MessageAndMetadata<byte[], byte[]> messageAndMetadata = kafkaStream.head();
> final Task message = new Task() {
> @Override
> public byte[] getBytes() {
> return messageAndMetadata.message();
> }
> };
> return message;
> }
> }).get();
> } catch (Exception e) {
> LOG.warn("Error in consumer " + e.getMessage());
> }
> return null;
> }
> @Override
> public void initialize(JSONObject configData, boolean publisherAckMode) 
> throws IOException {
> try {
> this.topicName = configData.getString("topicName");
> LOG.info("Topic name is " + topicName);
> } catch (JSONException e) {
> e.printStackTrace();
> LOG.error("Error parsing configuration", e);
> }
> Properties properties = new Properties();
> properties.put("zookeeper.connect", "localhost:2181");
> properties.put("group.id", "testgroup");
> ConsumerConfig consumerConfig = new ConsumerConfig(properties);
> //only one stream, and one topic, (Since we are not supporting 
> partitioning)
> topicCount.put(getTopicName(), 1);
> consumerConnector = new ZookeeperConsumerConnector(consumerConfig);
> messageStreams = consumerConnector.createMessageStreams(topicCount);
> }
> @Override
> public void stop() throws IOException {
> //TODO
> throw new UnsupportedOperationException("Method Not Implemented");
> }
> public String getTopicName() {
> return this.topicName;
> }
> }
> Lastly, I am using the following binary 
> kafka_2.8.0-0.8.1.1  
> and the following maven dependency
> <dependency>
>     <groupId>org.apache.kafka</groupId>
>     <artifactId>kafka_2.10</artifactId>
>     <version>0.8.1.1</version>
> </dependency>
> Any suggestions?
> Thanks
> aarti



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Reopened] (KAFKA-1980) Console consumer throws OutOfMemoryError with large max-messages

2017-08-30 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar reopened KAFKA-1980:
--

> Console consumer throws OutOfMemoryError with large max-messages
> 
>
> Key: KAFKA-1980
> URL: https://issues.apache.org/jira/browse/KAFKA-1980
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.8.1.1, 0.8.2.0
>Reporter: Håkon Hitland
>Priority: Minor
> Attachments: kafka-1980.patch
>
>
> Tested on kafka_2.11-0.8.2.0
> Steps to reproduce:
> - Have any topic with at least 1 GB of data.
> - Use kafka-console-consumer.sh on the topic passing a large number to 
> --max-messages, e.g.:
> $ bin/kafka-console-consumer.sh --zookeeper localhost --topic test.large 
> --from-beginning --max-messages  | head -n 40
> Expected result:
> Should stream messages up to max-messages
> Result:
> Out of memory error:
> [2015-02-23 19:41:35,006] ERROR OOME with size 1048618 
> (kafka.network.BoundedByteBufferReceive)
> java.lang.OutOfMemoryError: Java heap space
>   at java.nio.HeapByteBuffer.(HeapByteBuffer.java:57)
>   at java.nio.ByteBuffer.allocate(ByteBuffer.java:331)
>   at 
> kafka.network.BoundedByteBufferReceive.byteBufferAllocate(BoundedByteBufferReceive.scala:80)
>   at 
> kafka.network.BoundedByteBufferReceive.readFrom(BoundedByteBufferReceive.scala:63)
>   at kafka.network.Receive$class.readCompletely(Transmission.scala:56)
>   at 
> kafka.network.BoundedByteBufferReceive.readCompletely(BoundedByteBufferReceive.scala:29)
>   at kafka.network.BlockingChannel.receive(BlockingChannel.scala:111)
>   at kafka.consumer.SimpleConsumer.liftedTree1$1(SimpleConsumer.scala:71)
>   at 
> kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:68)
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(SimpleConsumer.scala:112)
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply(SimpleConsumer.scala:112)
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply(SimpleConsumer.scala:112)
>   at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply$mcV$sp(SimpleConsumer.scala:111)
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply(SimpleConsumer.scala:111)
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply(SimpleConsumer.scala:111)
>   at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
>   at kafka.consumer.SimpleConsumer.fetch(SimpleConsumer.scala:110)
>   at 
> kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:94)
>   at 
> kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:86)
>   at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)
> As a first guess I'd say that this is caused by slice() taking more memory 
> than expected. Perhaps because it is called on an Iterable and not an 
> Iterator?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
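
The slice() hypothesis above is easy to illustrate outside Kafka: taking the
first N elements eagerly allocates memory in proportion to N, while a lazy
wrapper uses constant extra memory. A minimal Java sketch of the distinction
(illustrative only, not the actual Kafka code path):

{code:java}
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class SliceDemo {
    // Eager slice: copies up to max elements into a list before returning.
    // With max near Integer.MAX_VALUE the backing array alone can trigger
    // an OutOfMemoryError, independent of how many elements actually exist.
    static List<byte[]> sliceEager(Iterator<byte[]> src, int max) {
        List<byte[]> out = new ArrayList<byte[]>(max); // allocation ~ max
        while (src.hasNext() && out.size() < max) {
            out.add(src.next());
        }
        return out;
    }

    // Lazy slice: wraps the source and stops after max elements, so memory
    // use stays O(1) no matter how large max is.
    static Iterator<byte[]> sliceLazy(final Iterator<byte[]> src, final int max) {
        return new Iterator<byte[]>() {
            int remaining = max;
            public boolean hasNext() { return remaining > 0 && src.hasNext(); }
            public byte[] next() { remaining--; return src.next(); }
            public void remove() { throw new UnsupportedOperationException(); }
        };
    }
}
{code}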


[jira] [Resolved] (KAFKA-888) problems when shutting down the java consumer .

2017-08-30 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-888.
-
Resolution: Cannot Reproduce

 Please reopen if you think the issue still exists.


> problems when shutting down the java consumer .
> ---
>
> Key: KAFKA-888
> URL: https://issues.apache.org/jira/browse/KAFKA-888
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.8.0
> Environment: Linux kacper-pc 3.2.0-40-generic #64-Ubuntu SMP 2013 
> x86_64 x86_64 x86_64 GNU/Linux, scala 2.9.2 
>Reporter: kacper chwialkowski
>Assignee: Neha Narkhede
>Priority: Minor
>  Labels: bug, consumer, exception
>
> I got the following error when shutting down the consumer :
> ConsumerFetcherThread-test-consumer-group_kacper-pc-1367268338957-12cb5a0b-0-0]
>  INFO  kafka.consumer.SimpleConsumer - Reconnect due to socket error: 
> java.nio.channels.ClosedByInterruptException: null
>   at 
> java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202)
>  ~[na:1.7.0_21]
>   at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:386) 
> ~[na:1.7.0_21]
>   at 
> sun.nio.ch.SocketAdaptor$SocketInputStream.read(SocketAdaptor.java:220) 
> ~[na:1.7.0_21]
>   at sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:103) 
> ~[na:1.7.0_21]
>   at 
> java.nio.channels.Channels$ReadableByteChannelImpl.read(Channels.java:385) 
> ~[na:1.7.0_21]
>   at kafka.utils.Utils$.read(Utils.scala:394) 
> ~[kafka_2.9.2-0.8.0-SNAPSHOT.jar:0.8.0-SNAPSHOT]
>   at 
> kafka.network.BoundedByteBufferReceive.readFrom(BoundedByteBufferReceive.scala:54)
>  ~[kafka_2.9.2-0.8.0-SNAPSHOT.jar:0.8.0-SNAPSHOT]
>   at kafka.network.Receive$class.readCompletely(Transmission.scala:56) 
> ~[kafka_2.9.2-0.8.0-SNAPSHOT.jar:0.8.0-SNAPSHOT]
>   at 
> kafka.network.BoundedByteBufferReceive.readCompletely(BoundedByteBufferReceive.scala:29)
>  ~[kafka_2.9.2-0.8.0-SNAPSHOT.jar:0.8.0-SNAPSHOT]
>   at kafka.network.BlockingChannel.receive(BlockingChannel.scala:100) 
> ~[kafka_2.9.2-0.8.0-SNAPSHOT.jar:0.8.0-SNAPSHOT]
>   at kafka.consumer.SimpleConsumer.liftedTree1$1(SimpleConsumer.scala:75) 
> [kafka_2.9.2-0.8.0-SNAPSHOT.jar:0.8.0-SNAPSHOT]
>   at 
> kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:73)
>  [kafka_2.9.2-0.8.0-SNAPSHOT.jar:0.8.0-SNAPSHOT]
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(SimpleConsumer.scala:112)
>  [kafka_2.9.2-0.8.0-SNAPSHOT.jar:0.8.0-SNAPSHOT]
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply(SimpleConsumer.scala:112)
>  [kafka_2.9.2-0.8.0-SNAPSHOT.jar:0.8.0-SNAPSHOT]
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply(SimpleConsumer.scala:112)
>  [kafka_2.9.2-0.8.0-SNAPSHOT.jar:0.8.0-SNAPSHOT]
>   at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33) 
> [kafka_2.9.2-0.8.0-SNAPSHOT.jar:0.8.0-SNAPSHOT]
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply$mcV$sp(SimpleConsumer.scala:111)
>  [kafka_2.9.2-0.8.0-SNAPSHOT.jar:0.8.0-SNAPSHOT]
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply(SimpleConsumer.scala:111)
>  [kafka_2.9.2-0.8.0-SNAPSHOT.jar:0.8.0-SNAPSHOT]
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply(SimpleConsumer.scala:111)
>  [kafka_2.9.2-0.8.0-SNAPSHOT.jar:0.8.0-SNAPSHOT]
>   at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33) 
> [kafka_2.9.2-0.8.0-SNAPSHOT.jar:0.8.0-SNAPSHOT]
>   at kafka.consumer.SimpleConsumer.fetch(SimpleConsumer.scala:110) 
> [kafka_2.9.2-0.8.0-SNAPSHOT.jar:0.8.0-SNAPSHOT]
>   at 
> kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:96)
>  [kafka_2.9.2-0.8.0-SNAPSHOT.jar:0.8.0-SNAPSHOT]
>   at 
> kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:88) 
> [kafka_2.9.2-0.8.0-SNAPSHOT.jar:0.8.0-SNAPSHOT]
>   at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:51) 
> [kafka_2.9.2-0.8.0-SNAPSHOT.jar:0.8.0-SNAPSHOT]
> and this is how I create my Consumer 
>   public Boolean call() throws Exception {
>     Map<String, Integer> topicCountMap = new HashMap<>();
>     topicCountMap.put(topic, new Integer(1));
>     Map<String, List<KafkaStream<byte[], byte[]>>> consumerMap =
>         consumer.createMessageStreams(topicCountMap);
>     KafkaStream<byte[], byte[]> stream = consumerMap.get(topic).get(0);
>     ConsumerIterator<byte[], byte[]> it = stream.iterator();
>     it.next();
>     LOGGER.info("Received the message. Shutting down");
>     consumer.commitOffsets();
> 
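
For context on the log lines above: ConsumerConnector.shutdown() interrupts
the internal fetcher threads, and the ClosedByInterruptException they log
while reconnecting is expected during teardown. A minimal sketch of a
shutdown that commits offsets first (class and field names are placeholders):

{code:java}
import kafka.javaapi.consumer.ConsumerConnector;

public class ConsumerShutdown {
    private final ConsumerConnector consumer;

    public ConsumerShutdown(ConsumerConnector consumer) {
        this.consumer = consumer;
    }

    public void stop() {
        // Persist consumed offsets before tearing down the fetchers.
        consumer.commitOffsets();
        // shutdown() interrupts the fetcher threads; the INFO-level
        // ClosedByInterruptException they log at this point is benign.
        consumer.shutdown();
    }
}
{code}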

[jira] [Resolved] (KAFKA-1463) producer fails with scala.tuple error

2017-08-30 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-1463.
--
Resolution: Won't Fix

 Please reopen if you think the issue still exists.


> producer fails with scala.tuple error
> -
>
> Key: KAFKA-1463
> URL: https://issues.apache.org/jira/browse/KAFKA-1463
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.8.0
> Environment: java, springsource
>Reporter: Joe
>Assignee: Jun Rao
>
> Running on a windows machine trying to debug first kafka program. The program 
> fails on the following line:
> producer = new kafka.javaapi.producer.Producer(
>   new ProducerConfig(props)); 
> ERROR:
> Exception in thread "main" java.lang.VerifyError: class scala.Tuple2$mcLL$sp 
> overrides final method _1.()Ljava/lang/Object;
>   at java.lang.ClassLoader.defineClass1(Native Method)
>   at java.lang.ClassLoader.defineClass(ClassLoader.java:791)
>   at 
> java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)...
> unable to find solution online.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-1980) Console consumer throws OutOfMemoryError with large max-messages

2017-08-30 Thread Manikumar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16147595#comment-16147595
 ] 

Manikumar commented on KAFKA-1980:
--

[~ndimiduk] Apologies for not giving a proper explanation. The problematic code 
exists in ReplayLogProducer, which is a rarely used tool built on the deprecated 
older consumer API. The tool may be deprecated or updated to the new API in 
KAFKA-5523.

> Console consumer throws OutOfMemoryError with large max-messages
> 
>
> Key: KAFKA-1980
> URL: https://issues.apache.org/jira/browse/KAFKA-1980
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.8.1.1, 0.8.2.0
>Reporter: Håkon Hitland
>Priority: Minor
> Attachments: kafka-1980.patch
>
>
> Tested on kafka_2.11-0.8.2.0
> Steps to reproduce:
> - Have any topic with at least 1 GB of data.
> - Use kafka-console-consumer.sh on the topic passing a large number to 
> --max-messages, e.g.:
> $ bin/kafka-console-consumer.sh --zookeeper localhost --topic test.large 
> --from-beginning --max-messages  | head -n 40
> Expected result:
> Should stream messages up to max-messages
> Result:
> Out of memory error:
> [2015-02-23 19:41:35,006] ERROR OOME with size 1048618 
> (kafka.network.BoundedByteBufferReceive)
> java.lang.OutOfMemoryError: Java heap space
>   at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
>   at java.nio.ByteBuffer.allocate(ByteBuffer.java:331)
>   at 
> kafka.network.BoundedByteBufferReceive.byteBufferAllocate(BoundedByteBufferReceive.scala:80)
>   at 
> kafka.network.BoundedByteBufferReceive.readFrom(BoundedByteBufferReceive.scala:63)
>   at kafka.network.Receive$class.readCompletely(Transmission.scala:56)
>   at 
> kafka.network.BoundedByteBufferReceive.readCompletely(BoundedByteBufferReceive.scala:29)
>   at kafka.network.BlockingChannel.receive(BlockingChannel.scala:111)
>   at kafka.consumer.SimpleConsumer.liftedTree1$1(SimpleConsumer.scala:71)
>   at 
> kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:68)
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(SimpleConsumer.scala:112)
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply(SimpleConsumer.scala:112)
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply(SimpleConsumer.scala:112)
>   at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply$mcV$sp(SimpleConsumer.scala:111)
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply(SimpleConsumer.scala:111)
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply(SimpleConsumer.scala:111)
>   at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
>   at kafka.consumer.SimpleConsumer.fetch(SimpleConsumer.scala:110)
>   at 
> kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:94)
>   at 
> kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:86)
>   at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)
> As a first guess I'd say that this is caused by slice() taking more memory 
> than expected. Perhaps because it is called on an Iterable and not an 
> Iterator?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-1980) Console consumer throws OutOfMemoryError with large max-messages

2017-08-30 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-1980.
--
Resolution: Won't Fix

[~ndimiduk] Agreed. Updated the JIRA.

> Console consumer throws OutOfMemoryError with large max-messages
> 
>
> Key: KAFKA-1980
> URL: https://issues.apache.org/jira/browse/KAFKA-1980
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.8.1.1, 0.8.2.0
>Reporter: Håkon Hitland
>Priority: Minor
> Attachments: kafka-1980.patch
>
>
> Tested on kafka_2.11-0.8.2.0
> Steps to reproduce:
> - Have any topic with at least 1 GB of data.
> - Use kafka-console-consumer.sh on the topic passing a large number to 
> --max-messages, e.g.:
> $ bin/kafka-console-consumer.sh --zookeeper localhost --topic test.large 
> --from-beginning --max-messages  | head -n 40
> Expected result:
> Should stream messages up to max-messages
> Result:
> Out of memory error:
> [2015-02-23 19:41:35,006] ERROR OOME with size 1048618 
> (kafka.network.BoundedByteBufferReceive)
> java.lang.OutOfMemoryError: Java heap space
>   at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
>   at java.nio.ByteBuffer.allocate(ByteBuffer.java:331)
>   at 
> kafka.network.BoundedByteBufferReceive.byteBufferAllocate(BoundedByteBufferReceive.scala:80)
>   at 
> kafka.network.BoundedByteBufferReceive.readFrom(BoundedByteBufferReceive.scala:63)
>   at kafka.network.Receive$class.readCompletely(Transmission.scala:56)
>   at 
> kafka.network.BoundedByteBufferReceive.readCompletely(BoundedByteBufferReceive.scala:29)
>   at kafka.network.BlockingChannel.receive(BlockingChannel.scala:111)
>   at kafka.consumer.SimpleConsumer.liftedTree1$1(SimpleConsumer.scala:71)
>   at 
> kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:68)
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(SimpleConsumer.scala:112)
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply(SimpleConsumer.scala:112)
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply(SimpleConsumer.scala:112)
>   at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply$mcV$sp(SimpleConsumer.scala:111)
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply(SimpleConsumer.scala:111)
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply(SimpleConsumer.scala:111)
>   at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
>   at kafka.consumer.SimpleConsumer.fetch(SimpleConsumer.scala:110)
>   at 
> kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:94)
>   at 
> kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:86)
>   at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)
> As a first guess I'd say that this is caused by slice() taking more memory 
> than expected. Perhaps because it is called on an Iterable and not an 
> Iterator?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-4297) Cannot Stop Kafka with Shell Script

2017-08-30 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-4297.
--
Resolution: Duplicate

Closing this as there is a newer PR under KAFKA-4931.

> Cannot Stop Kafka with Shell Script
> ---
>
> Key: KAFKA-4297
> URL: https://issues.apache.org/jira/browse/KAFKA-4297
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.1
> Environment: CentOS 6.7
>Reporter: Mabin Jeong
>Assignee: Tom Bentley
>Priority: Critical
>  Labels: easyfix
>
> If Kafka's home path is long, Kafka cannot be stopped with 'kafka-server-stop.sh'.
> That command showed this message:
> ```
> No kafka server to stop
> ```
> This bug is caused by the command line being too long, like this:
> ```
> /home/bbdev/Amasser/etc/alternatives/jre/bin/java -Xms1G -Xmx5G -server 
> -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 
> -XX:+DisableExplicitGC -Djava.awt.headless=true 
> -Xloggc:/home/bbdev/Amasser/var/log/kafka/kafkaServer-gc.log -verbose:gc 
> -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps 
> -Dcom.sun.management.jmxremote 
> -Dcom.sun.management.jmxremote.authenticate=false 
> -Dcom.sun.management.jmxremote.ssl=false 
> -Dkafka.logs.dir=/home/bbdev/Amasser/var/log/kafka 
> -Dlog4j.configuration=file:/home/bbdev/Amasser/etc/alternatives/kafka/bin/../config/log4j.properties
>  -cp 
> 

[jira] [Resolved] (KAFKA-270) sync producer / consumer test producing lot of kafka server exceptions & not getting the throughput mentioned here http://incubator.apache.org/kafka/performance.html

2017-08-30 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-270.
-
Resolution: Won't Fix

Closing due to inactivity. Please reopen if you think the issue still exists.


>  sync producer / consumer test producing lot of kafka server exceptions & not 
> getting the throughput mentioned here 
> http://incubator.apache.org/kafka/performance.html
> --
>
> Key: KAFKA-270
> URL: https://issues.apache.org/jira/browse/KAFKA-270
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, core
>Affects Versions: 0.7
> Environment: Linux 2.6.18-238.1.1.el5 , x86_64 x86_64 x86_64 
> GNU/Linux 
> ext3 file system with raid10
>Reporter: Praveen Ramachandra
>  Labels: clients, core, newdev, performance
>
> I am getting ridiculously low producer and consumer throughput.
> I am using the default config values for the producer, consumer, and broker,
> which should be good starting points that yield sufficient throughput.
> I would appreciate pointers on what settings or code changes are needed
> to get higher throughput.
> I changed num.partitions in the server.config to 10.
> Please look below for exception and error messages from the server
> BTW: I am running server, zookeeper, producer, consumer on the same host.
> Consumer Code=
>     long startTime = System.currentTimeMillis();
>     long endTime = startTime + runDuration*1000l;
>     Properties props = new Properties();
>     props.put("zk.connect", "localhost:2181");
>     props.put("groupid", subscriptionName); // to support multiple subscribers
>     props.put("zk.sessiontimeout.ms", "400");
>     props.put("zk.synctime.ms", "200");
>     props.put("autocommit.interval.ms", "1000");
>     consConfig = new ConsumerConfig(props);
>     consumer = kafka.consumer.Consumer.createJavaConsumerConnector(consConfig);
>     Map<String, Integer> topicCountMap = new HashMap<String, Integer>();
>     topicCountMap.put(topicName, new Integer(1)); // the topic to subscribe to
>     Map<String, List<KafkaMessageStream<Message>>> consumerMap =
>         consumer.createMessageStreams(topicCountMap);
>     KafkaMessageStream<Message> stream = consumerMap.get(topicName).get(0);
>     ConsumerIterator<Message> it = stream.iterator();
>     while (System.currentTimeMillis() <= endTime)
>     {
>         it.next(); // discard data
>         consumeMsgCount.incrementAndGet();
>     }
> End consumer CODE
> =Producer CODE
>     props.put("serializer.class", "kafka.serializer.StringEncoder");
>     props.put("zk.connect", "localhost:2181");
>     // Use random partitioner. Don't need the key type. Just set it to Integer.
>     // The message is of type String.
>     producer = new kafka.javaapi.producer.Producer<Integer, String>(new ProducerConfig(props));
>     long endTime = startTime + runDuration*1000l; // run duration is in seconds
>     while (System.currentTimeMillis() <= endTime)
>     {
>         String msg = org.apache.commons.lang.RandomStringUtils.random(msgSizes.get(0));
>         producer.send(new ProducerData<Integer, String>(topicName, msg));
>         pc.incrementAndGet();
>     }
>     java.util.Date date = new java.util.Date(System.currentTimeMillis());
>     System.out.println(date + " :: stopped producer for topic " + topicName);
> =END Producer CODE
> I see a bunch of exceptions like this
> [2012-02-11 02:44:11,945] ERROR Closing socket for /188.125.88.145 because of 
> error (kafka.network.Processor)
> java.io.IOException: Connection reset by peer
>   at sun.nio.ch.FileChannelImpl.transferTo0(Native Method)
>   at 
> sun.nio.ch.FileChannelImpl.transferToDirectly(FileChannelImpl.java:405)
>   at sun.nio.ch.FileChannelImpl.transferTo(FileChannelImpl.java:506)
>   at kafka.message.FileMessageSet.writeTo(FileMessageSet.scala:107)
>   at kafka.server.MessageSetSend.writeTo(MessageSetSend.scala:51)
>   at kafka.network.MultiSend.writeTo(Transmission.scala:95)
>   at kafka.network.Processor.write(SocketServer.scala:332)
>   at kafka.network.Processor.run(SocketServer.scala:209)
>   at java.lang.Thread.run(Thread.java:662)
> java.io.IOException: Connection reset by peer
>   at sun.nio.ch.FileDispatcher.read0(Native Method)
>   at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:21)
>   at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:198)
>   at sun.nio.ch.IOUtil.read(IOUtil.java:171)
>   at 
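
On the throughput question above: the posted producer loop pays one
synchronous round trip per message. A hedged sketch of switching the
0.7-era producer to async batching follows; producer.type and batch.size
are from the old producer config, and the values shown are illustrative
assumptions, not tuned recommendations.

{code:java}
import java.util.Properties;

import kafka.javaapi.producer.Producer;
import kafka.javaapi.producer.ProducerData;
import kafka.producer.ProducerConfig;

public class AsyncProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        props.put("zk.connect", "localhost:2181");
        // Batch sends on a background thread instead of one blocking
        // round trip per message.
        props.put("producer.type", "async"); // default is "sync"
        props.put("batch.size", "200");      // messages per batch (assumption)
        Producer<Integer, String> producer =
            new Producer<Integer, String>(new ProducerConfig(props));
        producer.send(new ProducerData<Integer, String>("test", "hello"));
        producer.close();
    }
}
{code}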

[jira] [Resolved] (KAFKA-1492) Getting error when sending producer request at the broker end with a single broker

2017-08-30 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-1492.
--
Resolution: Cannot Reproduce

 Please reopen if you think the issue still exists.


> Getting error when sending producer request at the broker end with a single 
> broker
> --
>
> Key: KAFKA-1492
> URL: https://issues.apache.org/jira/browse/KAFKA-1492
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.8.1.1
>Reporter: sriram
>Assignee: Jun Rao
>
> Tried to run a simple example by sending a message to a single broker.
> Getting this error:
> [2014-06-13 08:35:45,402] INFO Closing socket connection to /127.0.0.1. 
> (kafka.network.Processor)
> [2014-06-13 08:35:45,440] WARN [KafkaApi-1] Produce request with correlation 
> id 2 from client  on partition [samsung,0] failed due to Leader not local for 
> partition [samsung,0] on broker 1 (kafka.server.KafkaApis)
> [2014-06-13 08:35:45,440] INFO [KafkaApi-1] Send the close connection 
> response due to error handling produce request [clientId = , correlationId = 
> 2, topicAndPartition = [samsung,0]] with Ack=0 (kafka.server.KafkaApis)
> OS- Windows 7 , JDK 1.7 , Scala 2.10



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
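
The "Leader not local" warning above usually means a produce request reached
a broker that is not the leader for partition [samsung,0]. The stock 0.8
producer avoids this by fetching topic metadata from metadata.broker.list and
routing each message to the partition leader itself. A minimal sketch, with
the host and ack setting as placeholder assumptions:

{code:java}
import java.util.Properties;

import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

public class LeaderAwareProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Bootstrap list: the producer fetches metadata from here and then
        // sends each message directly to the partition's current leader.
        props.put("metadata.broker.list", "localhost:9092");
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        props.put("request.required.acks", "1"); // wait for the leader's ack
        Producer<String, String> producer =
            new Producer<String, String>(new ProducerConfig(props));
        producer.send(new KeyedMessage<String, String>("samsung", "hello"));
        producer.close();
    }
}
{code}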


[jira] [Resolved] (KAFKA-4520) Kafka broker fails with not so user-friendly error msg when log.dirs is not set

2017-08-30 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-4520.
--
   Resolution: Fixed
Fix Version/s: 0.11.0.0

> Kafka broker fails with not so user-friendly error msg when log.dirs is not 
> set
> ---
>
> Key: KAFKA-4520
> URL: https://issues.apache.org/jira/browse/KAFKA-4520
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Buchi Reddy B
>Priority: Trivial
> Fix For: 0.11.0.0
>
>
> I tried to bring up a Kafka broker without setting the log.dirs property, and
> it failed with the following error.
> {code:java}
> [2016-12-07 23:41:08,020] INFO KafkaConfig values:
>  advertised.host.name = 100.96.7.10
>  advertised.listeners = null
>  advertised.port = null
>  authorizer.class.name =
>  auto.create.topics.enable = true
>  auto.leader.rebalance.enable = true
>  background.threads = 10
>  broker.id = 0
>  broker.id.generation.enable = false
>  broker.rack = null
>  compression.type = producer
>  connections.max.idle.ms = 60
>  controlled.shutdown.enable = true
>  controlled.shutdown.max.retries = 3
>  controlled.shutdown.retry.backoff.ms = 5000
>  controller.socket.timeout.ms = 3
>  default.replication.factor = 1
>  delete.topic.enable = false
>  fetch.purgatory.purge.interval.requests = 1000
>  group.max.session.timeout.ms = 30
>  group.min.session.timeout.ms = 6000
>  host.name =
>  inter.broker.protocol.version = 0.10.1-IV2
>  leader.imbalance.check.interval.seconds = 300
>  leader.imbalance.per.broker.percentage = 10
>  listeners = PLAINTEXT://0.0.0.0:9092
>  log.cleaner.backoff.ms = 15000
>  log.cleaner.dedupe.buffer.size = 134217728
>  log.cleaner.delete.retention.ms = 8640
>  log.cleaner.enable = true
>  log.cleaner.io.buffer.load.factor = 0.9
>  log.cleaner.io.buffer.size = 524288
>  log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
>  log.cleaner.min.cleanable.ratio = 0.5
>  log.cleaner.min.compaction.lag.ms = 0
>  log.cleaner.threads = 1
>  log.cleanup.policy = [delete]
>  log.dir = /tmp/kafka-logs
>  log.dirs =
>  log.flush.interval.messages = 9223372036854775807
>  log.flush.interval.ms = null
>  log.flush.offset.checkpoint.interval.ms = 6
>  log.flush.scheduler.interval.ms = 9223372036854775807
>  log.index.interval.bytes = 4096
>  log.index.size.max.bytes = 10485760
>  log.message.format.version = 0.10.1-IV2
>  log.message.timestamp.difference.max.ms = 9223372036854775807
>  log.message.timestamp.type = CreateTime
>  log.preallocate = false
>  log.retention.bytes = -1
>  log.retention.check.interval.ms = 30
>  log.retention.hours = 168
>  log.retention.minutes = null
>  log.retention.ms = null
>  log.roll.hours = 168
>  log.roll.jitter.hours = 0
>  log.roll.jitter.ms = null
>  log.roll.ms = null
>  log.segment.bytes = 1073741824
>  log.segment.delete.delay.ms = 6
>  max.connections.per.ip = 2147483647
>  max.connections.per.ip.overrides =
>  message.max.bytes = 112
>  metric.reporters = []
>  metrics.num.samples = 2
>  metrics.sample.window.ms = 3
>  min.insync.replicas = 1
>  num.io.threads = 8
>  num.network.threads = 3
>  num.partitions = 1
>  num.recovery.threads.per.data.dir = 1
>  num.replica.fetchers = 1
>  offset.metadata.max.bytes = 4096
>  offsets.commit.required.acks = -1
>  offsets.commit.timeout.ms = 5000
>  offsets.load.buffer.size = 5242880
>  offsets.retention.check.interval.ms = 60
>  offsets.retention.minutes = 1440
>  offsets.topic.compression.codec = 0
>  offsets.topic.num.partitions = 50
>  offsets.topic.replication.factor = 3
>  offsets.topic.segment.bytes = 104857600
>  port = 9092
>  principal.builder.class = class 
> org.apache.kafka.common.security.auth.DefaultPrincipalBuilder
>  producer.purgatory.purge.interval.requests = 1000
>  queued.max.requests = 500
>  quota.consumer.default = 9223372036854775807
>  quota.producer.default = 9223372036854775807
>  quota.window.num = 11
>  quota.window.size.seconds = 1
>  replica.fetch.backoff.ms = 1000
>  replica.fetch.max.bytes = 1048576
>  replica.fetch.min.bytes = 1
>  replica.fetch.response.max.bytes = 10485760
>  replica.fetch.wait.max.ms = 500
>  replica.high.watermark.checkpoint.interval.ms = 5000
>  replica.lag.time.max.ms = 1
>  replica.socket.receive.buffer.bytes = 65536
>  replica.socket.timeout.ms = 3
>  replication.quota.window.num = 11
>  

[jira] [Resolved] (KAFKA-1980) Console consumer throws OutOfMemoryError with large max-messages

2017-08-29 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-1980.
--
Resolution: Fixed

Closing as per the comments. Please reopen if you think the issue still exists.


> Console consumer throws OutOfMemoryError with large max-messages
> 
>
> Key: KAFKA-1980
> URL: https://issues.apache.org/jira/browse/KAFKA-1980
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.8.1.1, 0.8.2.0
>Reporter: Håkon Hitland
>Priority: Minor
> Attachments: kafka-1980.patch
>
>
> Tested on kafka_2.11-0.8.2.0
> Steps to reproduce:
> - Have any topic with at least 1 GB of data.
> - Use kafka-console-consumer.sh on the topic passing a large number to 
> --max-messages, e.g.:
> $ bin/kafka-console-consumer.sh --zookeeper localhost --topic test.large 
> --from-beginning --max-messages  | head -n 40
> Expected result:
> Should stream messages up to max-messages
> Result:
> Out of memory error:
> [2015-02-23 19:41:35,006] ERROR OOME with size 1048618 
> (kafka.network.BoundedByteBufferReceive)
> java.lang.OutOfMemoryError: Java heap space
>   at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
>   at java.nio.ByteBuffer.allocate(ByteBuffer.java:331)
>   at 
> kafka.network.BoundedByteBufferReceive.byteBufferAllocate(BoundedByteBufferReceive.scala:80)
>   at 
> kafka.network.BoundedByteBufferReceive.readFrom(BoundedByteBufferReceive.scala:63)
>   at kafka.network.Receive$class.readCompletely(Transmission.scala:56)
>   at 
> kafka.network.BoundedByteBufferReceive.readCompletely(BoundedByteBufferReceive.scala:29)
>   at kafka.network.BlockingChannel.receive(BlockingChannel.scala:111)
>   at kafka.consumer.SimpleConsumer.liftedTree1$1(SimpleConsumer.scala:71)
>   at 
> kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:68)
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(SimpleConsumer.scala:112)
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply(SimpleConsumer.scala:112)
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply(SimpleConsumer.scala:112)
>   at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply$mcV$sp(SimpleConsumer.scala:111)
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply(SimpleConsumer.scala:111)
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply(SimpleConsumer.scala:111)
>   at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
>   at kafka.consumer.SimpleConsumer.fetch(SimpleConsumer.scala:110)
>   at 
> kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:94)
>   at 
> kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:86)
>   at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)
> As a first guess I'd say that this is caused by slice() taking more memory 
> than expected. Perhaps because it is called on an Iterable and not an 
> Iterator?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


  1   2   3   4   5   6   7   8   9   10   >