Kafka 0.11.0.1 and filebeat 6.1 compatibility

2018-02-09 Thread Sandeep Sarkar
Hi All,

 

I am using filebeat 6.1 and kafka 0.11.0.1 to push logs. From the filebeat 
logs I can see that the connection gets established, but then no logs are 
pushed.

Also, when I set filebeat's output.kafka.version to 0.11.0.1, I get this 
error message:

 

2018/02/08 09:41:15.417691 beat.go:635: CRIT Exiting: error initializing 
publisher:  unknown/unsupported kafka version '0.11.0.1' accessing 
'output.kafka' (source:'/etc/filebeat/filebeat.yml')

Exiting: error initializing publisher: unknown/unsupported kafka version 
'0.11.0.1' accessing 'output.kafka' (source:'/etc/filebeat/filebeat.yml')
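For reference, filebeat validates output.kafka.version against a fixed list of known protocol versions, and patch releases such as 0.11.0.1 are generally not in that list; a 0.11.0.x broker speaks the same protocol as 0.11.0.0. A config sketch that should pass validation (the hosts and topic values are placeholders, not from the original report):

```yaml
output.kafka:
  # Placeholder broker list and topic; adjust for your environment.
  hosts: ["kafka-broker:9092"]
  topic: "filebeat-logs"
  # Use the protocol version, not the broker's patch release:
  # "0.11.0.0" is accepted where "0.11.0.1" is rejected.
  version: "0.11.0.0"
```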

 

So I switched to kafka 0.10.2.0 and set filebeat's output.kafka.version to 
0.10.2.0. The version error is gone, but I still do not get any logs in 
kafka. Filebeat throws:

Kafka publish failed with: circuit breaker is open

 

And after some time it throws the messages below in a loop:

2018/02/09 15:08:13.075932 client.go:234: DBG [kafka] Kafka publish failed 
with: circuit breaker is open

2018/02/09 15:08:13.110793 client.go:220: DBG [kafka] finished kafka batch

2018/02/09 15:08:13.110814 client.go:234: DBG [kafka] Kafka publish failed 
with: circuit breaker is open

2018/02/09 15:08:13.149482 metrics.go:39: INFO Non-zero metrics in the last 
30s: beat.info.uptime.ms=3 beat.memstats.gc_next=18583840 
beat.memstats.memory_alloc=17667464 beat.memstats.memory_total=3883962925856 
filebeat.harvester.open_files=11 filebeat.harvester.running=10 
libbeat.config.module.running=0 libbeat.output.events.batches=832 
libbeat.output.events.failed=1703936 libbeat.output.events.total=1703936 
libbeat.pipeline.clients=2 libbeat.pipeline.events.active=4118 
libbeat.pipeline.events.retry=1703936 registrar.states.current=21
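The "circuit breaker is open" message means filebeat's output pipeline has tripped its breaker: after repeated publish failures, subsequent sends fail fast until a cooldown elapses, so the root cause is usually an earlier connectivity or topic-metadata error rather than the breaker itself. A minimal, generic sketch of the pattern (not filebeat's actual implementation; names and thresholds are illustrative):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class CircuitBreakerSketch {
    // Minimal circuit breaker: fail fast after maxFailures consecutive
    // errors, until resetTimeoutMs has elapsed.
    static class CircuitBreaker {
        final int maxFailures;
        final long resetTimeoutMs;
        int failures = 0;
        long openedAt = -1; // -1 means the breaker is closed

        CircuitBreaker(int maxFailures, long resetTimeoutMs) {
            this.maxFailures = maxFailures;
            this.resetTimeoutMs = resetTimeoutMs;
        }

        void call(Runnable publish) {
            if (openedAt >= 0) {
                if (System.currentTimeMillis() - openedAt < resetTimeoutMs) {
                    // Fail fast without touching the broker at all.
                    throw new IllegalStateException("circuit breaker is open");
                }
                openedAt = -1; // half-open: allow one probe attempt
                failures = 0;
            }
            try {
                publish.run();
                failures = 0;
            } catch (RuntimeException e) {
                if (++failures >= maxFailures) {
                    openedAt = System.currentTimeMillis();
                }
                throw e;
            }
        }
    }

    public static List<String> run() {
        CircuitBreaker breaker = new CircuitBreaker(2, 30_000);
        List<String> errors = new ArrayList<>();
        for (int i = 0; i < 4; i++) {
            try {
                // Simulated publish that always fails, e.g. broker down
                // or topic metadata unavailable.
                breaker.call(() -> { throw new RuntimeException("broker unreachable"); });
            } catch (RuntimeException e) {
                errors.add(e.getMessage());
            }
        }
        return errors;
    }

    public static void main(String[] args) {
        System.out.println(run());
    }
}
```

This is why the log shows only the breaker message in a loop: once open, filebeat never reaches the broker, so the underlying error has to be found earlier in the debug log.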

 

 

Please tell me how to debug this.

 

I am using logstash 6.1 as consumer.

 

-- 
Thanks

http://www.oracle.com/
Sandeep Sarkar | Member of Technical Staff
Phone: +918067445685
Oracle CGBU PM

Oracle India Bangalore 

http://www.oracle.com/commitment

Oracle is committed to developing practices and products that help protect the 
environment

 

 


zstd support status

2018-02-09 Thread Ivan Babrou
Hello,

I'm very interested in having zstd support in Kafka. There's a ticket, a
KIP and even a PR:

* https://issues.apache.org/jira/browse/KAFKA-4514
*
https://cwiki.apache.org/confluence/display/KAFKA/KIP-110%3A+Add+Codec+for+ZStandard+Compression
* https://github.com/apache/kafka/pull/2267

What does it take to get it across the finish line?
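For context, compression is selected per producer (or per topic) via compression.type; the KIP would add zstd alongside the existing codecs. A hedged sketch of how a producer config would look once the KIP lands (the zstd value is the KIP's proposal, not an option in any released version at the time of writing):

```properties
# producer.properties sketch -- released codecs at this point are:
#   compression.type=none|gzip|snappy|lz4
# with KIP-110 merged, zstd would become a fifth option:
compression.type=zstd
bootstrap.servers=localhost:9092
```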


Jenkins build is back to normal : kafka-trunk-jdk8 #2404

2018-02-09 Thread Apache Jenkins Server
See 




Build failed in Jenkins: kafka-trunk-jdk9 #387

2018-02-09 Thread Apache Jenkins Server
See 


Changes:

[me] KAFKA-6513: Corrected how Converters and HeaderConverters are

--
[...truncated 1.47 MB...]

kafka.zk.KafkaZkClientTest > testBrokerSequenceIdMethods PASSED

kafka.zk.KafkaZkClientTest > testCreateSequentialPersistentPath STARTED

kafka.zk.KafkaZkClientTest > testCreateSequentialPersistentPath PASSED

kafka.zk.KafkaZkClientTest > testConditionalUpdatePath STARTED

kafka.zk.KafkaZkClientTest > testConditionalUpdatePath PASSED

kafka.zk.KafkaZkClientTest > testTopicAssignmentMethods STARTED

kafka.zk.KafkaZkClientTest > testTopicAssignmentMethods PASSED

kafka.zk.KafkaZkClientTest > testPropagateIsrChanges STARTED

kafka.zk.KafkaZkClientTest > testPropagateIsrChanges PASSED

kafka.zk.KafkaZkClientTest > testDeleteRecursive STARTED

kafka.zk.KafkaZkClientTest > testDeleteRecursive PASSED

kafka.zk.KafkaZkClientTest > testDelegationTokenMethods STARTED

kafka.zk.KafkaZkClientTest > testDelegationTokenMethods PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytesWithCompression 
STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytesWithCompression 
PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > 
testIteratorIsConsistentWithCompression STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > 
testIteratorIsConsistentWithCompression PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testIteratorIsConsistent 
STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testIteratorIsConsistent PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testEqualsWithCompression 
STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testEqualsWithCompression 
PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testWrittenEqualsRead STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testWrittenEqualsRead PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testEquals STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testEquals PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytes STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytes PASSED

kafka.javaapi.consumer.ZookeeperConsumerConnectorTest > testBasic STARTED

kafka.javaapi.consumer.ZookeeperConsumerConnectorTest > testBasic PASSED

kafka.metrics.MetricsTest > testMetricsReporterAfterDeletingTopic STARTED

kafka.metrics.MetricsTest > testMetricsReporterAfterDeletingTopic PASSED

kafka.metrics.MetricsTest > testSessionExpireListenerMetrics STARTED

kafka.metrics.MetricsTest > testSessionExpireListenerMetrics PASSED

kafka.metrics.MetricsTest > 
testBrokerTopicMetricsUnregisteredAfterDeletingTopic STARTED

kafka.metrics.MetricsTest > 
testBrokerTopicMetricsUnregisteredAfterDeletingTopic PASSED

kafka.metrics.MetricsTest > testClusterIdMetric STARTED

kafka.metrics.MetricsTest > testClusterIdMetric PASSED

kafka.metrics.MetricsTest > testControllerMetrics STARTED

kafka.metrics.MetricsTest > testControllerMetrics PASSED

kafka.metrics.MetricsTest > testWindowsStyleTagNames STARTED

kafka.metrics.MetricsTest > testWindowsStyleTagNames PASSED

kafka.metrics.MetricsTest > testMetricsLeak STARTED

kafka.metrics.MetricsTest > testMetricsLeak PASSED

kafka.metrics.MetricsTest > testBrokerTopicMetricsBytesInOut STARTED

kafka.metrics.MetricsTest > testBrokerTopicMetricsBytesInOut PASSED

kafka.metrics.KafkaTimerTest > testKafkaTimer STARTED

kafka.metrics.KafkaTimerTest > testKafkaTimer PASSED

kafka.security.auth.ResourceTypeTest > testJavaConversions STARTED

kafka.security.auth.ResourceTypeTest > testJavaConversions PASSED

kafka.security.auth.ResourceTypeTest > testFromString STARTED

kafka.security.auth.ResourceTypeTest > testFromString PASSED

kafka.security.auth.OperationTest > testJavaConversions STARTED

kafka.security.auth.OperationTest > testJavaConversions PASSED

kafka.security.auth.ZkAuthorizationTest > testIsZkSecurityEnabled STARTED

kafka.security.auth.ZkAuthorizationTest > testIsZkSecurityEnabled PASSED

kafka.security.auth.ZkAuthorizationTest > testZkUtils STARTED

kafka.security.auth.ZkAuthorizationTest > testZkUtils PASSED

kafka.security.auth.ZkAuthorizationTest > testZkAntiMigration STARTED

kafka.security.auth.ZkAuthorizationTest > testZkAntiMigration PASSED

kafka.security.auth.ZkAuthorizationTest > testZkMigration STARTED

kafka.security.auth.ZkAuthorizationTest > testZkMigration PASSED

kafka.security.auth.ZkAuthorizationTest > testChroot STARTED

kafka.security.auth.ZkAuthorizationTest > testChroot PASSED

kafka.security.auth.ZkAuthorizationTest > testDelete STARTED

kafka.security.auth.ZkAuthorizationTest > testDelete PASSED

kafka.security.auth.ZkAuthorizationTest > testDeleteRecursive STARTED

kafka.security.auth.ZkAuthorizationTest > testDeleteRecursive PASSED

kafka.security.auth.AclTest > testAclJsonConversion STARTED

kafka.security.auth.AclTest > testAclJsonConversion PASSED


Re: Getting started with contributions

2018-02-09 Thread Aditya Vivek
Hi Matthias, my jira id is adityavivek94.

On Sat, Feb 10, 2018 at 3:24 AM, Matthias J. Sax 
wrote:

> Can you share your JIRA id, so we can add you to the list of
> contributors? This allows you to assign Jiras to yourself.
>
>
> -Matthias
>
> On 2/9/18 12:34 PM, Aditya Vivek wrote:
> > Hi team,
> >
> > I've followed the instructions given here
> > https://cwiki.apache.org/confluence/display/KAFKA/
> Contributing+Code+Changes
> > and forked the main repo. How do I go about getting added to the
> > contributor list and picking up one of the starter jiras linked on the
> how
> > to contribute page?
> >
> > Thanks,
> > Aditya
> >
>
>


Jenkins build is back to normal : kafka-trunk-jdk9 #386

2018-02-09 Thread Apache Jenkins Server
See 




Re: [VOTE] 1.0.1 RC0

2018-02-09 Thread Ewen Cheslack-Postava
Just a heads up that a fix for KAFKA-6529 landed to plug a file
descriptor leak. So this RC is dead, and I'll be generating RC1 soon.

Thanks,
Ewen

On Wed, Feb 7, 2018 at 11:06 AM, Vahid S Hashemian <
vahidhashem...@us.ibm.com> wrote:

> Hi Ewen,
>
> +1
>
> Building from source and running the quickstart were successful on Ubuntu
> and Windows 10.
>
> Thanks for running the release.
> --Vahid
>
>
>
> From:   Ewen Cheslack-Postava 
> To: dev@kafka.apache.org, us...@kafka.apache.org,
> kafka-clie...@googlegroups.com
> Date:   02/05/2018 07:49 PM
> Subject:[VOTE] 1.0.1 RC0
>
>
>
> Hello Kafka users, developers and client-developers,
>
> Sorry for the slight delay, but I've now prepared the first candidate for
> release of Apache Kafka 1.0.1.
>
> This is a bugfix release for the 1.0 branch that was first released with
> 1.0.0 about 3 months ago. We've fixed 46 significant issues since that
> release. Most of these are non-critical, but in aggregate these fixes will
> have significant impact. A few of the more significant fixes include:
>
> * KAFKA-6277: Make loadClass thread-safe for class loaders of Connect
> plugins
> * KAFKA-6185: Selector memory leak with high likelihood of OOM in case of
> down conversion
> * KAFKA-6269: KTable state restore fails after rebalance
> * KAFKA-6190: GlobalKTable never finishes restoring when consuming
> transactional messages
>
> Release notes for the 1.0.1 release:
> http://home.apache.org/~ewencp/kafka-1.0.1-rc0/RELEASE_NOTES.html
>
>
> *** Please download, test and vote by Thursday, Feb 8, 12pm PT ***
>
> Kafka's KEYS file containing PGP keys we use to sign the release:
> http://kafka.apache.org/KEYS
>
>
> * Release artifacts to be voted upon (source and binary):
> http://home.apache.org/~ewencp/kafka-1.0.1-rc0/
>
>
> * Maven artifacts to be voted upon:
> https://repository.apache.org/content/groups/staging/
>
>
> * Javadoc:
> http://home.apache.org/~ewencp/kafka-1.0.1-rc0/javadoc/
>
>
> * Tag to be voted upon (off 1.0 branch) is the 1.0.1 tag:
> https://github.com/apache/kafka/tree/1.0.1-rc0
>
>
>
> * Documentation:
> http://kafka.apache.org/10/documentation.html
>
>
> * Protocol:
> http://kafka.apache.org/10/protocol.html
>
>
>
> Please test and verify the release artifacts and submit a vote for this
> RC,
> or report any issues so we can fix them and get a new RC out ASAP!
> Although
> this release vote requires PMC votes to pass, testing, votes, and bug
> reports are valuable and appreciated from everyone.
>
> Thanks,
> Ewen
>
>
>
>
>


[jira] [Resolved] (KAFKA-6513) New Connect header support doesn't define `converter.type` property correctly

2018-02-09 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-6513.
--
   Resolution: Fixed
Fix Version/s: 1.2.0

Issue resolved by pull request 4512
[https://github.com/apache/kafka/pull/4512]

> New Connect header support doesn't define `converter.type` property correctly
> -
>
> Key: KAFKA-6513
> URL: https://issues.apache.org/jira/browse/KAFKA-6513
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 1.1.0
>Reporter: Randall Hauch
>Assignee: Randall Hauch
>Priority: Blocker
> Fix For: 1.2.0, 1.1.0
>
>
> The recent feature (KAFKA-5142) added a new {{converter.type}} property so 
> that {{Converter}} implementations now implement {{Configurable}}. However, 
> the worker is not correctly setting this new property, and is instead 
> incorrectly assuming the existing {{Converter}} implementations will set 
> it. For example:
> {noformat}
> Exception in thread "main" org.apache.kafka.common.config.ConfigException: 
> Missing required configuration "converter.type" which has no default value.
> at 
> org.apache.kafka.common.config.ConfigDef.parseValue(ConfigDef.java:472)
> at org.apache.kafka.common.config.ConfigDef.parse(ConfigDef.java:462)
> at 
> org.apache.kafka.common.config.AbstractConfig.(AbstractConfig.java:62)
> at 
> org.apache.kafka.connect.storage.ConverterConfig.(ConverterConfig.java:48)
> at 
> org.apache.kafka.connect.json.JsonConverterConfig.(JsonConverterConfig.java:59)
> at 
> org.apache.kafka.connect.json.JsonConverter.configure(JsonConverter.java:284)
> at 
> org.apache.kafka.connect.runtime.isolation.Plugins.newConfiguredPlugin(Plugins.java:77)
> at 
> org.apache.kafka.connect.runtime.isolation.Plugins.newConverter(Plugins.java:208)
> at org.apache.kafka.connect.runtime.Worker.(Worker.java:107)
> at 
> io.confluent.connect.replicator.ReplicatorApp.config(ReplicatorApp.java:104)
> at 
> io.confluent.connect.replicator.ReplicatorApp.main(ReplicatorApp.java:60)
> {noformat}
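The missing piece is that whoever instantiates a Converter now has to inject converter.type before calling configure(); the worker was omitting it. A self-contained sketch of the required shape (ConverterStub stands in for a real Converter such as JsonConverter, since the real types live in the Connect runtime):

```java
import java.util.HashMap;
import java.util.Map;

public class ConverterTypeSketch {
    // Stand-in for a Connect Converter that, after KAFKA-5142, validates
    // the "converter.type" property in configure().
    static class ConverterStub {
        String type;

        void configure(Map<String, String> config) {
            type = config.get("converter.type");
            if (type == null) {
                throw new IllegalArgumentException(
                    "Missing required configuration \"converter.type\""
                    + " which has no default value.");
            }
        }
    }

    // The fix, in spirit: the worker injects converter.type (key, value,
    // or header) instead of expecting converter implementations to
    // default it themselves.
    static ConverterStub newConfiguredConverter(Map<String, String> workerProps,
                                                String type) {
        Map<String, String> config = new HashMap<>(workerProps);
        config.put("converter.type", type);
        ConverterStub converter = new ConverterStub();
        converter.configure(config);
        return converter;
    }

    public static void main(String[] args) {
        ConverterStub c = newConfiguredConverter(new HashMap<>(), "value");
        System.out.println(c.type);
    }
}
```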



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: Getting started with contributions

2018-02-09 Thread Matthias J. Sax
Can your share your JIRA id, so we can add you to the list of
contributors? This allows you to assign Jiras to yourself.


-Matthias

On 2/9/18 12:34 PM, Aditya Vivek wrote:
> Hi team,
> 
> I've followed the instructions given here
> https://cwiki.apache.org/confluence/display/KAFKA/Contributing+Code+Changes
> and forked the main repo. How do I go about getting added to the
> contributor list and picking up one of the starter jiras linked on the how
> to contribute page?
> 
> Thanks,
> Aditya
> 





Getting started with contributions

2018-02-09 Thread Aditya Vivek
Hi team,

I've followed the instructions given here
https://cwiki.apache.org/confluence/display/KAFKA/Contributing+Code+Changes
and forked the main repo. How do I go about getting added to the
contributor list and picking up one of the starter jiras linked on the how
to contribute page?

Thanks,
Aditya


Question about developer documentation

2018-02-09 Thread Ray Chiang

There is some documentation for developers at:

  http://kafka.apache.org/project

There's also another set of links at the bottom of this wiki page:

  https://cwiki.apache.org/confluence/display/KAFKA/Index

There's some minor duplication of information, but it's definitely not 
presented in a clean, step-by-step manner.


I think it could benefit from a reorganization of how the information is 
presented.  Before I start making suggestions, does anyone have any 
thoughts on the subject?


-Ray



Jenkins build is back to normal : kafka-0.11.0-jdk7 #355

2018-02-09 Thread Apache Jenkins Server
See 




[jira] [Resolved] (KAFKA-6407) Sink task metrics are the same for all connectors

2018-02-09 Thread Robert Yokota (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Yokota resolved KAFKA-6407.
--
Resolution: Duplicate

> Sink task metrics are the same for all connectors
> -
>
> Key: KAFKA-6407
> URL: https://issues.apache.org/jira/browse/KAFKA-6407
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 1.0.0
>Reporter: Alexander Koval
>Priority: Minor
>
> I have a lot of sink connectors inside a distributed worker. When I tried to 
> export their metrics to Graphite, I discovered that all task metrics are 
> identical.
> {code}
> $>get -b 
> kafka.connect:type=sink-task-metrics,connector=prom-by-catalog-company,task=0 
> sink-record-read-total
> #mbean = 
> kafka.connect:type=sink-task-metrics,connector=prom-by-catalog-company,task=0:
> sink-record-read-total = 228744.0;
> $>get -b 
> kafka.connect:type=sink-task-metrics,connector=prom-kz-catalog-product,task=0 
> sink-record-read-total
> #mbean = 
> kafka.connect:type=sink-task-metrics,connector=prom-kz-catalog-product,task=0:
> sink-record-read-total = 228744.0;
> $>get -b 
> kafka.connect:type=sink-task-metrics,connector=prom-ru-catalog-company,task=0 
> sink-record-read-total
> #mbean = 
> kafka.connect:type=sink-task-metrics,connector=prom-ru-catalog-company,task=0:
> sink-record-read-total = 228744.0;
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-4632) Kafka Connect WorkerSinkTask.closePartitions doesn't handle WakeupException

2018-02-09 Thread Randall Hauch (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-4632.
--
   Resolution: Fixed
Fix Version/s: 0.10.0.1
   0.10.1.0

I'm going to close this as fixed in 0.10.0.1. [~ScottReynolds], if you 
disagree, please feel free to reopen with more detail.

> Kafka Connect WorkerSinkTask.closePartitions doesn't handle WakeupException
> ---
>
> Key: KAFKA-4632
> URL: https://issues.apache.org/jira/browse/KAFKA-4632
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.10.0.0, 0.10.0.1, 0.10.1.0
>Reporter: Scott Reynolds
>Priority: Major
> Fix For: 0.10.1.0, 0.10.0.1
>
>
> WorkerSinkTask's closePartitions method doesn't handle the WakeupException 
> that can be thrown from commitSync.
> {code}
> org.apache.kafka.common.errors.WakeupException
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.maybeTriggerWakeup
>  (ConsumerNetworkClient.java:404)
> at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll 
> (ConsumerNetworkClient.java:245)
> at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll 
> (ConsumerNetworkClient.java:180)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.commitOffsetsSync
>  (ConsumerCoordinator.java:499)
> at org.apache.kafka.clients.consumer.KafkaConsumer.commitSync 
> (KafkaConsumer.java:1104)
> at org.apache.kafka.connect.runtime.WorkerSinkTask.doCommitSync 
> (WorkerSinkTask.java:245)
> at org.apache.kafka.connect.runtime.WorkerSinkTask.doCommit 
> (WorkerSinkTask.java:264)
> at org.apache.kafka.connect.runtime.WorkerSinkTask.commitOffsets 
> (WorkerSinkTask.java:305)
> at org.apache.kafka.connect.runtime.WorkerSinkTask.closePartitions 
> (WorkerSinkTask.java:435)
> at org.apache.kafka.connect.runtime.WorkerSinkTask.execute 
> (WorkerSinkTask.java:147)
> at org.apache.kafka.connect.runtime.WorkerTask.doRun (WorkerTask.java:140)
> at org.apache.kafka.connect.runtime.WorkerTask.run (WorkerTask.java:175)
> at java.util.concurrent.Executors$RunnableAdapter.call (Executors.java:511)
> at java.util.concurrent.FutureTask.run (FutureTask.java:266)
> at java.util.concurrent.ThreadPoolExecutor.runWorker 
> (ThreadPoolExecutor.java:1142)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run 
> (ThreadPoolExecutor.java:617)
> at java.lang.Thread.run (Thread.java:745)
> {code}
> I believe it should catch and ignore it, as that is what the poll method 
> does when isStopping is true:
> {code:java}
> } catch (WakeupException we) {
> log.trace("{} consumer woken up", id);
> if (isStopping())
> return;
> if (shouldPause()) {
> pauseAll();
> } else if (!pausedForRedelivery) {
> resumeAll();
> }
> }
> {code}
> But I'm unsure; I'd love some insight into this.
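The handling proposed in the report can be sketched in a self-contained way as follows (WakeupException and commitSync are stand-ins here, since the real types live in kafka-clients; this is the reporter's suggested shape, not the actual fix):

```java
public class ClosePartitionsSketch {
    // Stand-in for org.apache.kafka.common.errors.WakeupException.
    static class WakeupException extends RuntimeException {}

    // Stand-in for consumer.commitSync() after wakeup() has been called:
    // the pending wakeup surfaces as an exception on the next blocking call.
    static void commitSync() {
        throw new WakeupException();
    }

    // Proposed shape of closePartitions(): since the task is already
    // stopping, a wakeup during the final commit is expected and can be
    // swallowed instead of crashing the worker task.
    static boolean closePartitions() {
        try {
            commitSync();
            return true;  // committed normally
        } catch (WakeupException e) {
            return false; // woken up while stopping: ignore the wakeup
        }
    }

    public static void main(String[] args) {
        System.out.println("committed=" + closePartitions());
    }
}
```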



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Build failed in Jenkins: kafka-trunk-jdk8 #2403

2018-02-09 Thread Apache Jenkins Server
See 


Changes:

[rajinisivaram] KAFKA-6529: Stop file descriptor leak when client disconnects 
with

--
[...truncated 413.02 KB...]
kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPersistEpochsBetweenInstances STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPersistEpochsBetweenInstances PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotClearAnythingIfOffsetToFirstOffset STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotClearAnythingIfOffsetToFirstOffset PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotLetOffsetsGoBackwardsEvenIfEpochsProgress STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotLetOffsetsGoBackwardsEvenIfEpochsProgress PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldGetFirstOffsetOfSubsequentEpochWhenOffsetRequestedForPreviousEpoch STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldGetFirstOffsetOfSubsequentEpochWhenOffsetRequestedForPreviousEpoch PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest2 STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest2 PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearEarliestOnEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearEarliestOnEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPreserveResetOffsetOnClearEarliestIfOneExists STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPreserveResetOffsetOnClearEarliestIfOneExists PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnInvalidOffsetIfEpochIsRequestedWhichIsNotCurrentlyTracked STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnInvalidOffsetIfEpochIsRequestedWhichIsNotCurrentlyTracked PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldFetchEndOffsetOfEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldFetchEndOffsetOfEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliestAndUpdateItsOffset STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliestAndUpdateItsOffset PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearAllEntries STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearAllEntries PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearLatestOnEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearLatestOnEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryHeadIfUndefinedPassed STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryHeadIfUndefinedPassed PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldIncreaseLeaderEpochBetweenLeaderRestarts STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldIncreaseLeaderEpochBetweenLeaderRestarts PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldAddCurrentLeaderEpochToMessagesAsTheyAreWrittenToLeader STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldAddCurrentLeaderEpochToMessagesAsTheyAreWrittenToLeader PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldSendLeaderEpochRequestAndGetAResponse STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldSendLeaderEpochRequestAndGetAResponse PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > shouldGetEpochsFromReplica 
STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > shouldGetEpochsFromReplica PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnUnknownTopicOrPartitionIfThrown STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnUnknownTopicOrPartitionIfThrown PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnNoLeaderForPartitionIfThrown STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnNoLeaderForPartitionIfThrown PASSED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
shouldSurviveFastLeaderChange STARTED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
shouldSurviveFastLeaderChange PASSED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
offsetsShouldNotGoBackwards STARTED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
offsetsShouldNotGoBackwards PASSED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
shouldFollowLeaderEpochBasicWorkflow STARTED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
shouldFollowLeaderEpochBasicWorkflow PASSED


Build failed in Jenkins: kafka-trunk-jdk9 #385

2018-02-09 Thread Apache Jenkins Server
See 


Changes:

[rajinisivaram] KAFKA-6529: Stop file descriptor leak when client disconnects 
with

--
[...truncated 1.69 MB...]
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at 
org.gradle.internal.event.AbstractBroadcastDispatch.dispatch(AbstractBroadcastDispatch.java:42)
at 
org.gradle.internal.event.BroadcastDispatch$SingletonDispatch.dispatch(BroadcastDispatch.java:230)
at 
org.gradle.internal.event.BroadcastDispatch$SingletonDispatch.dispatch(BroadcastDispatch.java:149)
at 
org.gradle.internal.event.AbstractBroadcastDispatch.dispatch(AbstractBroadcastDispatch.java:58)
at 
org.gradle.internal.event.BroadcastDispatch$CompositeDispatch.dispatch(BroadcastDispatch.java:324)
at 
org.gradle.internal.event.BroadcastDispatch$CompositeDispatch.dispatch(BroadcastDispatch.java:234)
at 
org.gradle.internal.event.ListenerBroadcast.dispatch(ListenerBroadcast.java:140)
at 
org.gradle.internal.event.ListenerBroadcast.dispatch(ListenerBroadcast.java:37)
at 
org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
at com.sun.proxy.$Proxy79.output(Unknown Source)
at 
org.gradle.api.internal.tasks.testing.results.StateTrackingTestResultProcessor.output(StateTrackingTestResultProcessor.java:87)
at 
org.gradle.api.internal.tasks.testing.results.AttachParentTestResultProcessor.output(AttachParentTestResultProcessor.java:48)
at jdk.internal.reflect.GeneratedMethodAccessor196.invoke(Unknown 
Source)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at 
org.gradle.internal.dispatch.FailureHandlingDispatch.dispatch(FailureHandlingDispatch.java:29)
at 
org.gradle.internal.dispatch.AsyncDispatch.dispatchMessages(AsyncDispatch.java:133)
at 
org.gradle.internal.dispatch.AsyncDispatch.access$000(AsyncDispatch.java:34)
at 
org.gradle.internal.dispatch.AsyncDispatch$1.run(AsyncDispatch.java:73)
at 
org.gradle.internal.operations.BuildOperationIdentifierPreservingRunnable.run(BuildOperationIdentifierPreservingRunnable.java:39)
at 
org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:63)
at 
org.gradle.internal.concurrent.ManagedExecutorImpl$1.run(ManagedExecutorImpl.java:46)
at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167)
at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
at 
org.gradle.internal.concurrent.ThreadFactoryImpl$ManagedThreadRunnable.run(ThreadFactoryImpl.java:55)
at java.base/java.lang.Thread.run(Thread.java:844)
Caused by: java.io.IOException: No space left on device
at java.base/java.io.FileOutputStream.writeBytes(Native Method)
at java.base/java.io.FileOutputStream.write(FileOutputStream.java:332)
at com.esotericsoftware.kryo.io.Output.flush(Output.java:154)
... 54 more
java.io.IOException: No space left on device
com.esotericsoftware.kryo.KryoException: java.io.IOException: No space left on 
device
at com.esotericsoftware.kryo.io.Output.flush(Output.java:156)
at com.esotericsoftware.kryo.io.Output.require(Output.java:134)
at com.esotericsoftware.kryo.io.Output.writeBoolean(Output.java:578)
at 
org.gradle.internal.serialize.kryo.KryoBackedEncoder.writeBoolean(KryoBackedEncoder.java:63)
at 
org.gradle.api.internal.tasks.testing.junit.result.TestOutputStore$Writer.onOutput(TestOutputStore.java:99)
at 
org.gradle.api.internal.tasks.testing.junit.result.TestReportDataCollector.onOutput(TestReportDataCollector.java:141)
at jdk.internal.reflect.GeneratedMethodAccessor194.invoke(Unknown 
Source)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at 
org.gradle.internal.event.AbstractBroadcastDispatch.dispatch(AbstractBroadcastDispatch.java:42)
at 

RE: [DISCUSS]KIP-235 DNS alias and secured connections

2018-02-09 Thread Skrzypek, Jonathan
Hi,

I have raised a PR (https://github.com/apache/kafka/pull/4485) with the 
suggested code changes.
There are, however, reported build failures; I don't understand what the 
issue is, since the tests are passing.
Any ideas?
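For readers following the thread: the motivation is that with SASL/GSSAPI (Kerberos), a client connecting through a bootstrap DNS alias must obtain a ticket for the canonical broker hostname, so the KIP proposes resolving the alias to its canonical names before the SASL handshake. A client config sketch (the alias and service name are placeholders; the final client-side option name is whatever the KIP ultimately lands on):

```properties
# Bootstrap through a DNS alias that fronts several brokers.
bootstrap.servers=kafka-alias.example.com:9093
security.protocol=SASL_SSL
sasl.mechanism=GSSAPI
sasl.kerberos.service.name=kafka
# Proposed behavior: resolve the alias to canonical broker hostnames
# so Kerberos principals like kafka/broker1.example.com@REALM match.
```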


Jonathan Skrzypek 

-Original Message-
From: Skrzypek, Jonathan [Tech] 
Sent: 29 January 2018 16:51
To: dev@kafka.apache.org
Subject: RE: [DISCUSS]KIP-235 DNS alias and secured connections

Hi,

Yes I believe this might address what you're seeing as well.

Jonathan Skrzypek
Middleware Engineering
Messaging Engineering
Goldman Sachs International

-Original Message-
From: Stephane Maarek [mailto:steph...@simplemachines.com.au]
Sent: 06 December 2017 10:43
To: dev@kafka.apache.org
Subject: RE: [DISCUSS]KIP-235 DNS alias and secured connections

Hi Jonathan

I think this will be very useful. I reported something similar here:
https://issues.apache.org/jira/browse/KAFKA-4781

Can you confirm that your KIP will address it?

Stéphane

On 6 Dec. 2017 8:20 pm, "Skrzypek, Jonathan" 
wrote:

> True, amended the KIP, thanks.
>
> Jonathan Skrzypek
> Middleware Engineering
> Messaging Engineering
> Goldman Sachs International
>
>
> -Original Message-
> From: Tom Bentley [mailto:t.j.bent...@gmail.com]
> Sent: 05 December 2017 18:19
> To: dev@kafka.apache.org
> Subject: Re: [DISCUSS]KIP-235 DNS alias and secured connections
>
> Hi Jonathan,
>
> It might be worth mentioning in the KIP that this is necessary only for
> *Kerberos* on SASL, and not for other SASL mechanisms. Reading the JIRA it
> makes sense, but I was confused up until that point.
>
> Cheers,
>
> Tom
>
> On 5 December 2017 at 17:53, Skrzypek, Jonathan 
> 
> wrote:
>
> > Hi,
> >
> > I would like to discuss a KIP I've submitted :
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-235%3A+Add+DNS+alias+support+for+secured+connection
> >
> > Feedback and suggestions welcome !
> >
> > Regards,
> > Jonathan Skrzypek
> > Middleware Engineering
> > Messaging Engineering
> > Goldman Sachs International
> > Christchurch Court - 10-15 Newgate Street London EC1A 7HD
> > Tel: +442070512977
> >
> >
>
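
For readers following the thread, the resolution step the KIP relies on can be
sketched in plain Java. This is only an illustration of standard InetAddress
lookups, not the actual client change; "localhost" stands in for a hypothetical
bootstrap alias:

```java
import java.net.InetAddress;

public class ExpandBootstrapAlias {
    public static void main(String[] args) throws Exception {
        // "localhost" stands in for a bootstrap DNS alias such as
        // kafka-cluster.example.com (hypothetical). Resolving the alias to
        // every address and using each address's canonical hostname lets a
        // client request the per-broker Kerberos principal kafka/<fqdn>@REALM
        // instead of a principal named after the alias.
        for (InetAddress addr : InetAddress.getAllByName("localhost")) {
            System.out.println(addr.getCanonicalHostName());
        }
    }
}
```

With a real alias that has one A record per broker, each loop iteration would
yield one broker's canonical FQDN.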


Re: [VOTE] KIP-251: Allow timestamp manipulation in Processor API

2018-02-09 Thread Bill Bejeck
Thanks for the KIP, +1 for me.

-Bill

On Fri, Feb 9, 2018 at 6:45 AM, Damian Guy  wrote:

> Thanks Matthias, +1
>
> On Fri, 9 Feb 2018 at 02:42 Ted Yu  wrote:
>
> > +1
> >  Original message From: "Matthias J. Sax" <
> > matth...@confluent.io> Date: 2/8/18  6:05 PM  (GMT-08:00) To:
> > dev@kafka.apache.org Subject: [VOTE] KIP-251: Allow timestamp
> > manipulation in Processor API
> > Hi,
> >
> > I want to start the vote for KIP-251:
> >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> 251%3A+Allow+timestamp+manipulation+in+Processor+API
> >
> >
> > -Matthias
> >
> >
>


[jira] [Created] (KAFKA-6551) Unbounded queues in WorkerSourceTask cause OutOfMemoryError

2018-02-09 Thread Gunnar Morling (JIRA)
Gunnar Morling created KAFKA-6551:
-

 Summary: Unbounded queues in WorkerSourceTask cause 
OutOfMemoryError
 Key: KAFKA-6551
 URL: https://issues.apache.org/jira/browse/KAFKA-6551
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Reporter: Gunnar Morling


A Debezium user reported an {{OutOfMemoryError}} to us, with over 50,000 
messages in the {{WorkerSourceTask#outstandingMessages}} map.

This map is unbounded, and I can't see any way of "rate limiting" how many
records are added to it. Growth can only be limited indirectly, by reducing the
offset flush interval, but since connectors can return large numbers of
messages from a single {{poll()}} call, that's not sufficient in all cases.
Note that the user reported this issue while snapshotting a database, i.e. a
high number of records arrived in a very short period of time.

To solve the problem, I'd suggest making this map backpressure-aware and thus
preventing its indefinite growth, so that no further records are polled from
the connector until messages have been taken out of the map again.
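
The suggestion above amounts to a bounded in-flight window. A minimal sketch
with a counting semaphore follows; the cap, method names, and call sites are
hypothetical illustrations, not Connect's actual API:

```java
import java.util.concurrent.Semaphore;

public class BoundedOutstanding {
    // Hypothetical cap; a real fix would presumably make this configurable.
    private static final int MAX_OUTSTANDING = 3;
    private final Semaphore permits = new Semaphore(MAX_OUTSTANDING);

    // Called before handing a polled record to the producer: blocks the task
    // thread once MAX_OUTSTANDING records are awaiting acknowledgment, so the
    // outstanding-messages map can no longer grow without bound.
    void beforeDispatch() throws InterruptedException {
        permits.acquire();
    }

    // Called from the producer callback once a record is acked and can be
    // removed from the map.
    void afterAck() {
        permits.release();
    }

    public static void main(String[] args) throws Exception {
        BoundedOutstanding b = new BoundedOutstanding();
        for (int i = 0; i < 3; i++) b.beforeDispatch();  // fill the window
        System.out.println("can dispatch more: " + b.permits.tryAcquire()); // false
        b.afterAck();                                    // one record acked
        System.out.println("can dispatch more: " + b.permits.tryAcquire()); // true
    }
}
```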



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-6550) UpdateMetadataRequest should be lazily created

2018-02-09 Thread huxihx (JIRA)
huxihx created KAFKA-6550:
-

 Summary: UpdateMetadataRequest should be lazily created
 Key: KAFKA-6550
 URL: https://issues.apache.org/jira/browse/KAFKA-6550
 Project: Kafka
  Issue Type: Improvement
  Components: controller
Affects Versions: 1.0.0
Reporter: huxihx
Assignee: huxihx


In ControllerBrokerRequestBatch.sendRequestsToBrokers, there is no need to
eagerly construct the UpdateMetadataRequest.Builder, since
updateMetadataRequestBrokerSet is sometimes empty. In those cases, we should
defer construction until the request is actually needed.
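
The deferral described here is the usual lazy-construction pattern. A minimal
sketch, using a plain string as a stand-in for the real
UpdateMetadataRequest.Builder:

```java
import java.util.Set;
import java.util.function.Supplier;

public class LazyBuilder {
    static int constructions = 0;

    // Stand-in for the (relatively expensive) builder construction.
    static String buildRequest() {
        constructions++;
        return "UpdateMetadataRequest";
    }

    public static void main(String[] args) {
        Set<Integer> brokers = Set.of(); // empty: no request will be sent

        // Lazy style: construction happens only on the first get(), which is
        // never reached when the broker set is empty.
        Supplier<String> lazy = LazyBuilder::buildRequest;
        if (!brokers.isEmpty()) {
            lazy.get();
        }
        System.out.println("constructions: " + constructions); // 0
    }
}
```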



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [VOTE] KIP-251: Allow timestamp manipulation in Processor API

2018-02-09 Thread Damian Guy
Thanks Matthias, +1

On Fri, 9 Feb 2018 at 02:42 Ted Yu  wrote:

> +1
>  Original message From: "Matthias J. Sax" <
> matth...@confluent.io> Date: 2/8/18  6:05 PM  (GMT-08:00) To:
> dev@kafka.apache.org Subject: [VOTE] KIP-251: Allow timestamp
> manipulation in Processor API
> Hi,
>
> I want to start the vote for KIP-251:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-251%3A+Allow+timestamp+manipulation+in+Processor+API
>
>
> -Matthias
>
>


[jira] [Resolved] (KAFKA-6548) Migrate committed offsets from ZooKeeper to Kafka

2018-02-09 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/KAFKA-6548?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sönke Liebau resolved KAFKA-6548.
-
Resolution: Not A Problem

Hi [~ppmanikandan...@gmail.com] 

It sounds like, in principle, you know what you want to do and are on the right
track. I don't think this should be filed in JIRA; it is rather something that
would be well placed on the Kafka users mailing list or Stack Overflow.
Actually, I just found that you also posted this to
[stackoverflow|https://stackoverflow.com/questions/48696705/migrate-zookeeper-offset-details-to-kafka]
and received an answer there, so I'll close this issue, as I think further
discussion is better placed on SO.

> Migrate committed offsets from ZooKeeper to Kafka
> -
>
> Key: KAFKA-6548
> URL: https://issues.apache.org/jira/browse/KAFKA-6548
> Project: Kafka
>  Issue Type: Improvement
>  Components: offset manager
>Affects Versions: 0.10.0.0
> Environment: Windows
>Reporter: Manikandan P
>Priority: Minor
>
> We were using a previous version of Kafka (0.8.x) where all the offset 
> details were stored in ZooKeeper.
> We have now moved to a newer version of Kafka (0.10.x) where all the topic 
> offset details are stored in Kafka itself.
> We have to move all the topic offset details from ZooKeeper to Kafka for an 
> existing application in production.
> Kafka is installed on a Windows machine, so we can't run 
> kafka-consumer-groups.sh from Windows.
> Please advise how to migrate committed offsets from ZooKeeper to Kafka.
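
For reference, the Kafka documentation describes a dual-commit migration path
for the old (Scala) consumer, which is pure configuration and therefore works
regardless of OS. Sketched here from memory, so verify against the docs for
your exact version:

```properties
# Step 1: commit offsets to both ZooKeeper and Kafka during the transition.
offsets.storage=kafka
dual.commit.enabled=true
# Step 2 (after a rolling bounce of all consumers in the group):
# stop the ZooKeeper commits.
# dual.commit.enabled=false
```

Also note that Kafka ships Windows equivalents of the shell scripts under
bin\windows, e.g. kafka-consumer-groups.bat.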



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-6549) Deadlock in ZookeeperClient while processing Controller Events

2018-02-09 Thread Manikumar (JIRA)
Manikumar created KAFKA-6549:


 Summary: Deadlock in ZookeeperClient while processing Controller 
Events
 Key: KAFKA-6549
 URL: https://issues.apache.org/jira/browse/KAFKA-6549
 Project: Kafka
  Issue Type: Bug
Reporter: Manikumar
Assignee: Manikumar
 Attachments: td.txt

Stack traces from a single-node test cluster that deadlocked while processing
the controller Reelect and Expire events. The full thread dump is attached.
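
The cycle in the traces can be reduced to a small, hypothetical sketch: one
thread holds a write lock and waits on a latch, while the thread that would
count the latch down needs the same lock. Timeouts are added so the demo
terminates instead of hanging:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class DeadlockSketch {
    public static void main(String[] args) throws Exception {
        ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
        CountDownLatch processed = new CountDownLatch(1); // "Expire processed"

        // Like main-EventThread: takes the write lock, then blocks until the
        // event thread has processed the Expire event.
        Thread watcher = new Thread(() -> {
            lock.writeLock().lock();
            try {
                processed.await(); // never counted down while lock is held
            } catch (InterruptedException ignored) {
            } finally {
                lock.writeLock().unlock();
            }
        });
        watcher.start();
        Thread.sleep(100); // let the watcher grab the lock first

        // Like controller-event-thread: processing the event requires the same
        // lock. tryLock with a timeout stands in for the indefinite block.
        boolean acquired = lock.writeLock().tryLock(500, TimeUnit.MILLISECONDS);
        System.out.println("event thread got lock: " + acquired); // false: deadlock
        if (acquired) {
            lock.writeLock().unlock();
        }
        processed.countDown(); // break the cycle so the demo exits
        watcher.join();
    }
}
```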

{quote}

"main-EventThread" #18 daemon prio=5 os_prio=31 tid=0x7f83e4285800 nid=0x7d03 waiting on condition [0x7278b000]
 java.lang.Thread.State: WAITING (parking)
 at sun.misc.Unsafe.park(Native Method)
 - parking to wait for <0x0007bccadf30> (a java.util.concurrent.CountDownLatch$Sync)
 at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
 at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
 at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
 at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
 at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:231)
 at kafka.controller.KafkaController$Expire.waitUntilProcessed(KafkaController.scala:1505)
 at kafka.controller.KafkaController$$anon$7.beforeInitializingSession(KafkaController.scala:163)
 at kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$$anonfun$process$2$$anonfun$apply$mcV$sp$6.apply(ZooKeeperClient.scala:365)
 at kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$$anonfun$process$2$$anonfun$apply$mcV$sp$6.apply(ZooKeeperClient.scala:365)
 at scala.collection.Iterator$class.foreach(Iterator.scala:891)
 at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
 at scala.collection.MapLike$DefaultValuesIterable.foreach(MapLike.scala:206)
 at kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$$anonfun$process$2.apply$mcV$sp(ZooKeeperClient.scala:365)
 at kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$$anonfun$process$2.apply(ZooKeeperClient.scala:363)
 at kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$$anonfun$process$2.apply(ZooKeeperClient.scala:363)
 at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:250)
 at kafka.utils.CoreUtils$.inWriteLock(CoreUtils.scala:258)
 at kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$.process(ZooKeeperClient.scala:363)
 at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:531)
 at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506)

Locked ownable synchronizers:
 - <0x000780054860> (a java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync)

"controller-event-thread" #42 prio=5 os_prio=31 tid=0x7f83e4293800 nid=0xad03 waiting on condition [0x73fd3000]
 java.lang.Thread.State: WAITING (parking)
 at sun.misc.Unsafe.park(Native Method)
 - parking to wait for <0x0007bcc584a0> (a java.util.concurrent.CountDownLatch$Sync)
 at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
 at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
 at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
 at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
 at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:231)
 at kafka.zookeeper.ZooKeeperClient.handleRequests(ZooKeeperClient.scala:148)
 at kafka.zk.KafkaZkClient.retryRequestsUntilConnected(KafkaZkClient.scala:1439)
 at kafka.zk.KafkaZkClient.kafka$zk$KafkaZkClient$$retryRequestUntilConnected(KafkaZkClient.scala:1432)
 at kafka.zk.KafkaZkClient.registerZNodeChangeHandlerAndCheckExistence(KafkaZkClient.scala:1171)
 at kafka.controller.KafkaController$Reelect$.process(KafkaController.scala:1475)
 at kafka.controller.ControllerEventManager$ControllerEventThread$$anonfun$doWork$1.apply$mcV$sp(ControllerEventManager.scala:69)
 at kafka.controller.ControllerEventManager$ControllerEventThread$$anonfun$doWork$1.apply(ControllerEventManager.scala:69)
 at kafka.controller.ControllerEventManager$ControllerEventThread$$anonfun$doWork$1.apply(ControllerEventManager.scala:69)
 at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:31)
 at kafka.controller.ControllerEventManager$ControllerEventThread.doWork(ControllerEventManager.scala:68)
 at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:82)

"kafka-shutdown-hook" #14 prio=5 os_prio=31 tid=0x7f83e29b1000 nid=0x560f waiting on condition [0x75208000]
 java.lang.Thread.State: WAITING (parking)
 at sun.misc.Unsafe.park(Native Method)
 - parking to wait for <0x000780054860> (a