Re: Kafka cluster management lifecycle

2017-05-26 Thread Waleed Fateem
I might have misunderstood what you're asking for, but my understanding is that you were looking for a way to have Kafka automatically remove a failed Kafka broker from the cluster for you. In doing so, it would need to reassign the partitions on that failed Kafka broker to the other brokers in your cluster.

Re: Kafka cluster management lifecycle

2017-05-26 Thread Roman Naumenko
Thanks Waleed, I did read those guides, and basically that was the reason I asked how Kafka is supposed to be managed. I believe managing a small-ish cluster with 3-5, maybe a dozen, nodes is doable with scripts. But what happens at a scale beyond that? -- Roman On Fri, May 26, 2017 at 4:19 PM

Re: Kafka cluster management lifecycle

2017-05-26 Thread Waleed Fateem
Hi Roman, I have not heard of an automated way to do this. You have to manually reassign partitions from the Kafka broker you're planning on removing from the cluster. Have a look at the section "decommissioning brokers" in the documentation:

Re: KIP-162: Enable topic deletion by default

2017-05-26 Thread Jim Jagielski
> On May 26, 2017, at 1:10 PM, Vahid S Hashemian > wrote: > > Gwen, thanks for the KIP. > It looks good to me. > > Just a minor suggestion: It would be great if the command asks for a > confirmation (y/n) before deleting the topic (similar to how removing ACLs >

Re: [DISCUSS]: KIP-161: streams record processing exception handlers

2017-05-26 Thread Jan Filipiak
Hi Eno, that does make a lot more sense to me. When you pop stuff out of a topic you can at least put the coordinates (topic-partition, offset) additionally into the log, which is probably kinda nice to just fetch it from the CLI and check what's going on. One additional question: This handler is

Kafka cluster management lifecycle

2017-05-26 Thread Roman Naumenko
Hi, We’re running Kafka in AWS with replication factor 2. There is a requirement to rotate servers periodically (or add new ones). Is there a way to make Kafka remove “failed” instances from the cluster, rebalance automatically whatever it needs to rebalance, and continue to work as usual? I’ve

Kafka connector throughput reduction upon avro schema change

2017-05-26 Thread Dave Hamilton
We are currently using the Kafka S3 connector to ship Avro data to S3. We made a change to one of our Avro schemas and have noticed consumer throughput on the Kafka connector drop considerably. I am wondering if there is anything we can do to avoid such issues when we update schemas in the

Re: KIP-162: Enable topic deletion by default

2017-05-26 Thread Vahid S Hashemian
Gwen, thanks for the KIP. It looks good to me. Just a minor suggestion: it would be great if the command asked for a confirmation (y/n) before deleting the topic (similar to how removing ACLs works). Thanks. --Vahid From: Gwen Shapira To: "d...@kafka.apache.org"
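The confirmation behavior Vahid suggests is easy to sketch: read a y/n answer and only proceed on an explicit "yes". This is a hypothetical helper for illustration, not part of the actual kafka-topics tooling:

```java
public class ConfirmDeleteSketch {

    // Returns true only on an explicit "y"/"yes" answer (case-insensitive),
    // so the default -- including an empty reply -- is the safe "do not delete".
    static boolean confirmed(String answer) {
        if (answer == null) return false;
        String a = answer.trim().toLowerCase();
        return a.equals("y") || a.equals("yes");
    }

    public static void main(String[] args) {
        System.out.println(confirmed("y"));    // true
        System.out.println(confirmed("Yes"));  // true
        System.out.println(confirmed(""));     // false -> topic is kept
    }
}
```

Defaulting to "keep the topic" on anything other than an explicit yes is the conservative choice for a destructive operation.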

Re: SASL and SSL

2017-05-26 Thread Waleed Fateem
Hi Kaufman, Thanks for the blog link. It definitely helped clear up a few things, but I was struggling to understand the behavior I was seeing, where clients were still able to establish an SSL connection after SASL authentication even when the trust store config was not set on the client side and
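For reference, a minimal client configuration for SASL over SSL typically looks like the sketch below. The property names are real Kafka client configs; the paths, password, and mechanism are placeholders. Note that when `ssl.truststore.location` is left unset, the JVM falls back to its default `cacerts` truststore, which is one way a client can still complete the SSL handshake without an explicit truststore.

```java
import java.util.Properties;

public class SaslSslClientConfigSketch {

    // Illustrative SASL_SSL client settings; values are placeholders.
    static Properties clientConfig() {
        Properties props = new Properties();
        // SSL for transport encryption plus SASL for authentication.
        props.put("security.protocol", "SASL_SSL");
        props.put("sasl.mechanism", "PLAIN");  // or GSSAPI, SCRAM-SHA-256, ...
        // Without these, the JVM's default cacerts truststore is consulted
        // when validating the broker's certificate.
        props.put("ssl.truststore.location", "/path/to/client.truststore.jks");
        props.put("ssl.truststore.password", "changeit");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(clientConfig().getProperty("security.protocol"));
    }
}
```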

Re: [DISCUSS]: KIP-161: streams record processing exception handlers

2017-05-26 Thread Damian Guy
In that case, though, every access to that key is doomed to failure, as the database is corrupted. So I think it should probably die in a steaming heap at that point! On Fri, 26 May 2017 at 17:33 Eno Thereska wrote: > Hi Damian, > > I was thinking of cases when there is

Re: [DISCUSS]: KIP-161: streams record processing exception handlers

2017-05-26 Thread Eno Thereska
Hi Damian, I was thinking of cases when there is bit-rot on the storage itself and we get a malformed record that cannot be de-serialized. There is an interesting intersection here with CRCs in both Kafka (already there, they throw on deserialization) and potentially local storage (we don't

Re: Kafka Authorization and ACLs Broken

2017-05-26 Thread Kamalov, Alex
Hey Raghav, Yes, I would very much love to get your configs, so I can model against it. Thanks again, Alex From: Raghav Date: Thursday, May 25, 2017 at 10:54 PM To: Mike Marzo Cc: Darshan Purandare , Rajini

Re: [DISCUSS]: KIP-161: streams record processing exception handlers

2017-05-26 Thread Damian Guy
Eno, Under what circumstances would you get a deserialization exception from the state store? I can only think of the case where someone has provided a bad deserializer to a method that creates a state store. In which case it would be a user error and probably should just abort? Thanks, Damian

Broker asks other broker to shut down

2017-05-26 Thread Neil Moore
We have a case where one of our brokers running Kafka 0.10.1.1 is telling another one to shut down. There is nothing obvious (to me) in the logs of either source or destination of this message to explain why it would happen. Does anybody see anything in the logs (below) that I have missed, or

Re: [DISCUSS]: KIP-161: streams record processing exception handlers

2017-05-26 Thread Eno Thereska
See latest reply to Jan's note. I think I unnecessarily broadened the scope of this KIP to the point where it sounded like it handles all sorts of exceptions. The scope should be strictly limited to "poison pill" records for now. Will update KIP, Thanks Eno > On 26 May 2017, at 16:16,

Re: Kafka Authorization and ACLs Broken

2017-05-26 Thread Raghav
Hi Alex In fact I copied the same configuration that Rajini pasted above and it worked for me. You can try the same. Let me know if it doesn't work. Thanks. On Fri, May 26, 2017 at 4:19 AM, Kamalov, Alex wrote: > Hey Raghav, > > > > Yes, I would very much love to

Re: [DISCUSS]: KIP-161: streams record processing exception handlers

2017-05-26 Thread Eno Thereska
Hi Jan, You're right. I think I got carried away and broadened the scope of this KIP beyond its original purpose. This handler will only be there for deserialization errors, i.e., "poison pills", and is not intended to be a catch-all handler for all sorts of other problems (e.g., NPE exception

Re: [DISCUSS]: KIP-161: streams record processing exception handlers

2017-05-26 Thread Matthias J. Sax
"bad" for this case would mean that we got a `DeserializationException`. I am not sure if any other processing error should be covered? @Eno: this raises one question. Might it be better to allow for two handlers instead of one? One for deserialization exceptions and one for all other

Re: vpn vs TimeoutException

2017-05-26 Thread Peter Sinoros Szabo
Hi, Do you know if a retry tries to use the same broker connection, or may it reinitialize that connection too? Thanks, - Sini From: "Peter Sinoros Szabo" To: users@kafka.apache.org Date: 2017/05/25 17:01 Subject: vpn vs TimeoutException Hi,

Re: [DISCUSS]: KIP-161: streams record processing exception handlers

2017-05-26 Thread Matthias J. Sax
About `LogAndThresholdExceptionHandler`: If the handler needs to keep track of the number of failed messages, then it becomes stateful -- not sure if we should do that. But maybe we can introduce 2 metrics (might be an interesting metric to report to the user anyway) and allow programmatic access to
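Matthias's point about the handler becoming stateful can be illustrated with a small sketch: the handler keeps a failure counter (which could double as a metric) and flips from "continue" to "fail" once a threshold is crossed. All names here are hypothetical, taken from the discussion, not from any Kafka API:

```java
import java.util.concurrent.atomic.AtomicLong;

public class LogAndThresholdHandlerSketch {

    enum HandlerResponse { CONTINUE, FAIL }

    // Hypothetical "log and threshold" handler: stateful, because it must
    // count failures -- which is exactly the concern raised in the thread.
    static class LogAndThresholdHandler {
        private final long maxFailures;
        // The counter doubles as a metric the application could expose.
        private final AtomicLong failures = new AtomicLong();

        LogAndThresholdHandler(long maxFailures) {
            this.maxFailures = maxFailures;
        }

        HandlerResponse handle(String topic, int partition, long offset, Exception e) {
            long seen = failures.incrementAndGet();
            System.err.printf("Bad record %s-%d@%d (%d so far): %s%n",
                    topic, partition, offset, seen, e.getMessage());
            return seen > maxFailures ? HandlerResponse.FAIL : HandlerResponse.CONTINUE;
        }

        long failureCount() {  // programmatic access, as suggested for metrics
            return failures.get();
        }
    }

    public static void main(String[] args) {
        LogAndThresholdHandler handler = new LogAndThresholdHandler(2);
        for (long offset = 0; offset < 3; offset++) {
            System.out.println(handler.handle("orders", 0, offset,
                    new RuntimeException("malformed")));
        }
        // The first two failures CONTINUE; the third crosses the threshold and FAILs.
    }
}
```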

Re: KIP-162: Enable topic deletion by default

2017-05-26 Thread Jorge Esteban Quilcate Otoya
+1 On Fri., May 26, 2017 at 16:14, Matthias J. Sax () wrote: > +1 > > On 5/26/17 7:03 AM, Gwen Shapira wrote: > > Hi Kafka developers, users and friends, > > > > I've added a KIP to improve our out-of-the-box usability a bit: > > KIP-162: Enable topic deletion by

Re: KIP-162: Enable topic deletion by default

2017-05-26 Thread Matthias J. Sax
+1 On 5/26/17 7:03 AM, Gwen Shapira wrote: > Hi Kafka developers, users and friends, > > I've added a KIP to improve our out-of-the-box usability a bit: > KIP-162: Enable topic deletion by default: > https://cwiki.apache.org/confluence/display/KAFKA/KIP-162+-+Enable+topic+deletion+by+default >

KIP-162: Enable topic deletion by default

2017-05-26 Thread Gwen Shapira
Hi Kafka developers, users and friends, I've added a KIP to improve our out-of-the-box usability a bit: KIP-162: Enable topic deletion by default: https://cwiki.apache.org/confluence/display/KAFKA/KIP-162+-+Enable+topic+deletion+by+default Pretty simple :) Discussion and feedback are welcome.

Trouble with querying offsets when using new consumer groups API

2017-05-26 Thread Jerry George
Hi, I had a question about the new consumer APIs. I am having trouble retrieving the offsets once the consumers are *disconnected* when using the new consumer v2 API. Following is what I am trying to do: *bin/kafka-consumer-groups.sh --new-consumer --bootstrap-server kafka:9092 --group group

File Transfers between two systems using Kafka

2017-05-26 Thread Mohammed Manna
Hello, I currently have the following understanding of KafkaProducer and KafkaConsumer: 1) If I send a file, it's broken down into lines using some default delimiter (LF or \n). 2) Therefore, if 2 producers publish 2 different files to the same topic, that doesn't mean that they are going as two
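On point 2: records from the two files will indeed interleave in the topic as a whole, but if each producer keys its records by file name, the default partitioner sends all of a file's lines to one partition, where they stay in send order. The sketch below is self-contained and only mimics the default partitioner with a hash-mod (the real one uses murmur2 on the serialized key):

```java
import java.util.ArrayList;
import java.util.List;

public class FileInterleavingSketch {

    // Simplified stand-in for Kafka's default partitioner:
    // same key -> same partition.
    static int partitionFor(String key, int numPartitions) {
        return Math.abs(key.hashCode() % numPartitions);
    }

    public static void main(String[] args) {
        int numPartitions = 4;
        List<List<String>> partitions = new ArrayList<>();
        for (int i = 0; i < numPartitions; i++) partitions.add(new ArrayList<>());

        // Two "producers" interleaving lines of two files into the same topic,
        // each keying its records by file name.
        String[][] sends = {
                {"fileA", "A-line1"}, {"fileB", "B-line1"},
                {"fileA", "A-line2"}, {"fileB", "B-line2"},
        };
        for (String[] send : sends) {
            partitions.get(partitionFor(send[0], numPartitions)).add(send[1]);
        }

        // Each file's lines land in one partition, in send order, even though
        // the sends themselves interleaved.
        System.out.println(partitions.get(partitionFor("fileA", numPartitions)));
        System.out.println(partitions.get(partitionFor("fileB", numPartitions)));
    }
}
```

A consumer reading a single partition then sees each file's lines contiguously and in order, though Kafka itself never knows about "files", only records.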

Re: Loss of Messages

2017-05-26 Thread Vinayak Sharma
Hi, I came across this JIRA issue (link) for the above-mentioned problem. Can you confirm if this is actually an issue in Kafka, or can the problem be solved by changing some configuration parameters? Regards, Vinayak. On 24 May 2017 at 16:27,
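Whether this is a bug or a configuration issue depends on the details, but message loss with replication often traces back to durability settings. The property names below are real Kafka producer and broker/topic configs; the values are illustrative, and with replication factor 2, `min.insync.replicas=2` will block writes while one replica is down (a deliberate durability/availability trade-off):

```java
import java.util.Properties;

public class DurabilityConfigSketch {

    // Producer side: wait for all in-sync replicas and retry on transient errors.
    static Properties producerDurabilityConfig() {
        Properties props = new Properties();
        props.put("acks", "all");
        props.put("retries", Integer.toString(Integer.MAX_VALUE));
        // Keep ordering intact while retrying (relevant on pre-idempotence brokers).
        props.put("max.in.flight.requests.per.connection", "1");
        return props;
    }

    // Broker/topic side: refuse writes when too few replicas are in sync,
    // and never elect an out-of-sync replica as leader.
    static Properties brokerDurabilityConfig() {
        Properties props = new Properties();
        props.put("min.insync.replicas", "2");
        props.put("unclean.leader.election.enable", "false");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(producerDurabilityConfig());
        System.out.println(brokerDurabilityConfig());
    }
}
```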

Re: [DISCUSS]: KIP-161: streams record processing exception handlers

2017-05-26 Thread Jan Filipiak
Hi, unfortunately no. Think about "caching": these records popping out of there, or multi-step tasks (join, aggregate, repartition all in one go); the last repartitioner might throw because it can't determine the partition, only because a get on the join store caused a flush through the aggregates. This

Re: [DISCUSS]: KIP-161: streams record processing exception handlers

2017-05-26 Thread Eno Thereska
Thanks Jan, The record passed to the handler will always be the problematic record. There are 2 cases/types of exceptions for the purposes of this KIP: 1) any exception during deserialization. The bad record + the exception (i.e. DeserializeException) will be passed to the handler. The handler
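The handler contract Eno describes, where the problematic record's coordinates and the exception are passed in and the handler decides whether to skip or abort, can be sketched as a small interface. This is a hypothetical illustration of the KIP's idea; the names (`RecordExceptionHandler`, `HandlerResponse`) are made up for this sketch, not the final Kafka Streams API:

```java
import java.nio.charset.StandardCharsets;

public class HandlerSketch {

    enum HandlerResponse { CONTINUE, FAIL }

    interface RecordExceptionHandler {
        // The bad record's coordinates plus the exception are passed in,
        // so the handler can log, skip, or abort.
        HandlerResponse handle(String topic, int partition, long offset,
                               byte[] key, byte[] value, Exception exception);
    }

    // A "log and continue" policy: skip the poison pill and keep processing.
    static class LogAndContinueHandler implements RecordExceptionHandler {
        @Override
        public HandlerResponse handle(String topic, int partition, long offset,
                                      byte[] key, byte[] value, Exception exception) {
            System.err.printf("Skipping bad record at %s-%d@%d: %s%n",
                    topic, partition, offset, exception.getMessage());
            return HandlerResponse.CONTINUE;
        }
    }

    public static void main(String[] args) {
        RecordExceptionHandler handler = new LogAndContinueHandler();
        HandlerResponse r = handler.handle("orders", 0, 42L,
                null, "not-avro".getBytes(StandardCharsets.UTF_8),
                new RuntimeException("malformed payload"));
        System.out.println(r);  // CONTINUE: the record is skipped, processing goes on
    }
}
```

Logging the (topic, partition, offset) coordinates is what makes the earlier suggestion in this thread work: the bad record can later be fetched from the CLI for inspection.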

Re: [DISCUSS]: KIP-161: streams record processing exception handlers

2017-05-26 Thread Jan Filipiak
Hi, quick question: From the KIP it doesn't quite make sense to me how that fits with caching. With caching, the consumer record might not be at all related to some processor throwing while processing. Would it not make more sense to get the ProcessorName + the object for processing and