Graceful Shutdown always fails on multi-broker setup (Windows)

2018-05-09 Thread M. Manna
Hello, I have followed the graceful shutdown process by using the following (in addition to the default controlled.shutdown.enable): controlled.shutdown.max.retries=10 and controlled.shutdown.retry.backoff.ms=3000. I am always having issues where not all the brokers shut down gracefully. And it's
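
For reference, a sketch of how the settings mentioned above would appear in server.properties (controlled.shutdown.enable already defaults to true):

    controlled.shutdown.enable=true
    controlled.shutdown.max.retries=10
    controlled.shutdown.retry.backoff.ms=3000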

Kafka offset problem when using Spark Streaming...

2018-05-09 Thread Pena Quijada Alexander
Hi all, We're facing some problems with our Spark Streaming jobs; since yesterday we have been getting the following error in our logs when the jobs fail: java.lang.AssertionError: assertion failed: Beginning offset 562747 is after the ending offset 562743 for topic elk-topic partition 0. Any help

Re: Graceful Shutdown always fails on multi-broker setup (Windows)

2018-05-09 Thread Jan Filipiak
Hi, this is expected. A graceful shutdown means the broker only shuts down when it is not the leader of any partition. Therefore you should not be able to gracefully shut down your entire cluster. Hope that helps. Best, Jan On 09.05.2018 12:02, M. Manna wrote: Hello, I have

Re: Fw: Re: Kafka-connect

2018-05-09 Thread Jagannath Bilgi
Thank you, Williams. I tried a different way and was able to add the Cassandra connector. However, I am unable to fetch data in the console. Below are the details. {"name": "packs2","config" : { "tasks.max": "1",  "connector.class" : "com.datamountaineer.streamreactor.connect.cassandra.source.CassandraSourceConnector",

Help needed for Upgrade from 0.10.2 to 1.1

2018-05-09 Thread Darshan
Hi, we were on Kafka 0.10.2.1. While upgrading to 1.1, we bring down all 3 Kafka brokers, make the change in the config file as shown below, which is recommended in http://kafka.apache.org/11/documentation.html#upgrade, and restart the brokers: *inter.broker.protocol.version=1.1*
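
For comparison, the linked upgrade guide describes a rolling sequence in which the protocol version stays pinned to the old release until every broker runs the new binaries; a sketch of the two steps, assuming the 0.10.2 starting point from the message:

    # step 1: upgrade the broker binaries one at a time, still speaking the old protocol
    inter.broker.protocol.version=0.10.2
    # step 2: once all brokers run 1.1, bump the version and do one more rolling restart
    inter.broker.protocol.version=1.1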

Re: Kafka offset problem when using Spark Streaming...

2018-05-09 Thread Ted Yu
Can you give us more information: the release of Spark and Kafka you're using, and anything interesting from the broker/client logs - feel free to use pastebin to pass a snippet if needed. Thanks On Wed, May 9, 2018 at 3:42 AM, Pena Quijada Alexander <a.penaquij...@reply.it> wrote: > Hi all, > > We're

Kafka as K/V store

2018-05-09 Thread Sudhir Babu Pothineni
We would like to use Kafka as a key/value store. We need to put, get, and subscribe to a particular “key”. Any pointers on how to do this? Thanks Sudhir

Re: Kafka offset problem when using Spark Streaming...

2018-05-09 Thread Matthias J. Sax
Hard to say. Might be a Spark issue though... On 5/9/18 3:42 AM, Pena Quijada Alexander wrote: > Hi all, > > We're facing some problems with ours Spark Streaming jobs, from yesterday we > have got the following error into our logs when the jobs fail: > > java.lang.AssertionError: assertion

Re: Log Cleanup Support for Windows [KAFKA-1194]

2018-05-09 Thread Martin Gainty
A patch could be rejected if: 1) there is no test case to prove the feature works, 2) the patch causes a failure in an existing test case, 3) the patch errors in an existing test case, 4) the patch works in only one version of an OS (and doesn't work in other versions), 5) implementing the patch will place a

Re: Kafka as K/V store

2018-05-09 Thread Matthias J. Sax
You might want to look into Kafka Streams. In particular KTable and Interactive Queries (IQ). A `put` would be a write to the table source topic, while a `get` can be implemented via IQ. To subscribe to a particular key, you would consume the whole source topic and filter for the key you are
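
A minimal sketch of that pattern, assuming String keys/values and placeholder topic/store names ("kv-topic", "kv-store"); a `put` is just a normal produce to the source topic, and a `get` queries the materialized store:

    import java.util.Properties;
    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.common.utils.Bytes;
    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.StreamsConfig;
    import org.apache.kafka.streams.kstream.KTable;
    import org.apache.kafka.streams.kstream.Materialized;
    import org.apache.kafka.streams.state.KeyValueStore;
    import org.apache.kafka.streams.state.QueryableStoreTypes;
    import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;

    public class KvStoreExample {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "kv-store-app");
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
            props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

            // Materialize the source topic as a queryable KTable / state store.
            StreamsBuilder builder = new StreamsBuilder();
            KTable<String, String> table = builder.table(
                    "kv-topic",
                    Materialized.<String, String, KeyValueStore<Bytes, byte[]>>as("kv-store"));

            KafkaStreams streams = new KafkaStreams(builder.build(), props);
            streams.start();

            // "get": look up the latest value for a key via Interactive Queries.
            // (In practice, retry here until the store is ready, since store()
            // throws InvalidStateStoreException during rebalances/startup.)
            ReadOnlyKeyValueStore<String, String> store =
                    streams.store("kv-store", QueryableStoreTypes.keyValueStore());
            System.out.println(store.get("some-key"));
        }
    }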

How to do failover in case of single replica topics on 3 node kafka cluster

2018-05-09 Thread Gangadhar Mylapuram
Hi, I have a 3-node cluster with a single-replica topic with three partitions. I am not using a 3-way replicated topic because my brokers use a distributed backend. Node1: broker1 -> log.dir=/nfsexport/broker1 (say partition1 owner) Node2: broker2 -> log.dir=/nfsexport/broker2 (say partition2 owner)

Re: Log Cleanup Support for Windows [KAFKA-1194]

2018-05-09 Thread M. Manna
The tests have passed and my changes are covered by the existing tests written for LogSegmentTest. I would be grateful if someone could confirm the same. On the Windows platform, some log/segment/index tests will always fail because of the file lock/unlock issue, but they all pass on Linux. Also, the build

Re: Kafka as K/V store

2018-05-09 Thread Sudhir Babu Pothineni
Thanks Matthias. > On May 9, 2018, at 10:57 AM, Matthias J. Sax wrote: > > You might want to look into Kafka Streams. In particular KTable and > Interactive Queries (IQ). > > A `put` would be a write to the table source topic, while a `get` can be > implemented via IQ. >

Re: How to do failover in case of single replica topics on 3 node kafka cluster

2018-05-09 Thread M. Manna
Hi, assuming I got your question right - in a 3-node setup, that's a "cluster down" scenario if one of your brokers goes down. The rule of thumb in distributed computing is that ceil(N/2)-1 total failures are allowed, where N is your node count. So what you are testing for will probably require 2 more nodes. Regards,
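
Working that rule through for this case: with N = 3, ceil(3/2) - 1 = 1, so a single failure is tolerated; tolerating two failures needs ceil(N/2) - 1 >= 2, i.e. N = 5, which is where the "2 more nodes" comes from.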

Kafka Consumer Group cmd line Tool - Group is missing even though it's present

2018-05-09 Thread M. Manna
Hello, based on the Quick Start on the Kafka site, I was trying to use the kafka-consumer-groups command line script: PS C:\kafka_2.11-1.1.0\bin\windows> .\kafka-consumer-groups.bat --new-consumer --bootstrap-server localhost:9092 --list The [new-consumer] option is deprecated and will be
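
For reference, the non-deprecated form of the same commands (the group name below is only a placeholder):

    .\kafka-consumer-groups.bat --bootstrap-server localhost:9092 --list
    .\kafka-consumer-groups.bat --bootstrap-server localhost:9092 --describe --group my-group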

Log Compaction configuration over all topics in cluster

2018-05-09 Thread David Espinosa
Hi all, I would like to apply log compaction configuration to every topic in my Kafka cluster, as default properties. These configuration properties are: cleanup.policy, delete.retention.ms, segment.ms, and min.cleanable.dirty.ratio. I have tried to place them in the server.properties
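
Note that the broker-level defaults in server.properties use different names from the per-topic settings; a sketch of the cluster-wide equivalents (the values shown are placeholders, not recommendations):

    log.cleanup.policy=compact
    log.cleaner.delete.retention.ms=86400000
    log.roll.ms=604800000
    log.cleaner.min.cleanable.ratio=0.5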

Merging clusters or changing zk root

2018-05-09 Thread Luke Steensen
Hello, I suspect the answer is no, but I'm curious if there is any way to either change a running cluster's zookeeper chroot or perform a "merge" of two clusters such that their individual workloads can be distributed across the combined set of brokers. Thanks! Luke

Re: Graceful Shutdown always fails on multi-broker setup (Windows)

2018-05-09 Thread Jan Filipiak
Hi, yes, your case is the exception. In usual deployments Kafka has to be there 100% of the time. So, as the name rolling restart suggests, you usually upgrade / do maintenance on boxes (a few at a time) depending on how your topics are laid out across brokers. On 09.05.2018 12:13, M. Manna

Log Cleanup Support for Windows [KAFKA-1194]

2018-05-09 Thread M. Manna
Hello, this issue has been outstanding for a while and has been impacting us both at development and deployment time. We have had to manually build the Kafka core jar and use it to work with Windows for over a year. The auto log/index cleanup feature is very important for us on Windows because it helps us

Re: Kafka as K/V store

2018-05-09 Thread Martin Gainty
//best to do your generic type declaration at class level and refer to later KafkaConsumer consumer = new KafkaConsumer<>(props); //best to do generic type declaration at class level and refer to later java.util.HashMap hashMap=new java.util.HashMap();
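
A compact sketch of what that suggestion looks like in practice (the class name and the String key/value types are purely illustrative):

    import java.util.HashMap;
    import java.util.Map;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class KvConsumerExample {
        // declare the generic types once, at class level, and refer to them later
        private final KafkaConsumer<String, String> consumer;
        private final Map<String, String> cache = new HashMap<>();

        public KvConsumerExample(Properties props) {
            this.consumer = new KafkaConsumer<>(props);
        }
    }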

Re: Fw: Re: Kafka-connect

2018-05-09 Thread Burton Williams
Good to see that you've got it figured out, almost. Is there data in Cassandra? Did you check that? I have never used the Cassandra connector, so I don't know if you've set it up correctly. You'll have to start by checking the source for data. That's all I can help with at this point. Sorry -B