Re: Kafka Rebalance on Watcher event Question

2015-05-10 Thread Manikumar Reddy
If both C1 and C2 belong to the same consumer group, then a rebalance will be triggered. A consumer subscribes to change events on the consumer id registry within its group. On Mon, May 11, 2015 at 10:55 AM, dinesh kumar dinesh...@gmail.com wrote: Hi, I am looking at the code of
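The mechanism described here is the 0.8 high-level consumer's ZooKeeper watch on the group's consumer id registry (/consumers/<group>/ids in the 0.8.x layout): any consumer joining or leaving the group changes the children of that path, which fires the watch and starts a rebalance. A minimal standalone sketch of that pattern using the same zkclient library Kafka 0.8 depends on (group name and ZooKeeper address are placeholders; this is an illustration, not the actual ZookeeperConsumerConnector code):

    import java.util.List;
    import org.I0Itec.zkclient.IZkChildListener;
    import org.I0Itec.zkclient.ZkClient;

    public class ConsumerRegistryWatch {
        public static void main(String[] args) throws InterruptedException {
            String group = "my-group";                        // placeholder group name
            ZkClient zk = new ZkClient("localhost:2181", 6000, 6000);

            // Each member of the group registers an ephemeral node under this path,
            // so any join or leave shows up as a child change.
            String idsPath = "/consumers/" + group + "/ids";

            zk.subscribeChildChanges(idsPath, new IZkChildListener() {
                @Override
                public void handleChildChange(String parentPath, List<String> currentChildren) {
                    // Fired on any membership change in the group; the real connector
                    // reacts by starting a rebalance, regardless of which topics the
                    // changed consumer was subscribed to.
                    System.out.println("Group membership changed: " + currentChildren);
                }
            });

            Thread.sleep(Long.MAX_VALUE); // keep the watcher alive for the demo
        }
    }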

Kafka Rebalance on Watcher event Question

2015-05-10 Thread dinesh kumar
Hi, I am looking at the code of kafka.consumer.ZookeeperConsumerConnector.scala (link here https://github.com/apache/kafka/blob/0.8.2/core/src/main/scala/kafka/consumer/ZookeeperConsumerConnector.scala) and I see that all consumer ids belonging to a particular group are registered under the path

Re: Kafka Rebalance on Watcher event Question

2015-05-10 Thread dinesh kumar
But why? What is the reason for triggering a rebalance if none of the consumer's topics are affected? Is there some reason for triggering a rebalance irrespective of whether the consumer's topics are affected? On 11 May 2015 at 11:06, Manikumar Reddy ku...@nmsworks.co.in wrote: If both C1,C2

Re: Kafka Client in Rust

2015-05-10 Thread Ewen Cheslack-Postava
Added to the wiki, which required adding a new Rust section :) Thanks for the contribution, Yousuf! On Sun, May 10, 2015 at 6:57 PM, Yousuf Fauzan yousuffau...@gmail.com wrote: Hi All, I have created a Kafka client for Rust. The client supports Metadata, Produce, Fetch, and Offset requests. I

Asynchronous producer-consumer

2015-05-10 Thread Knowledge gatherer
Hi, I have a requirement in which I have to configure the producer and consumer asynchronously, so that the queued messages are sent for every 1 MB of data. Please provide some help. Thanks
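If the goal is an asynchronous producer that sends roughly every 1 MB of queued data, one way to approximate it with the Java producer (0.8.2+) is to raise batch.size to 1 MB and use linger.ms to keep batches open while they fill. A hedged sketch, noting that batch.size applies per partition, so this approximates rather than exactly enforces a 1 MB trigger; broker and topic names are placeholders:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class AsyncBatchingProducer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");   // placeholder broker
            props.put("key.serializer",
                      "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer",
                      "org.apache.kafka.common.serialization.StringSerializer");
            // Accumulate up to ~1 MB per partition before a batch is sent...
            props.put("batch.size", Integer.toString(1024 * 1024));
            // ...and wait up to 500 ms for a batch to fill before sending anyway.
            props.put("linger.ms", "500");

            KafkaProducer<String, String> producer = new KafkaProducer<>(props);
            for (int i = 0; i < 10000; i++) {
                // send() is asynchronous: it appends to the in-memory batch and returns.
                producer.send(new ProducerRecord<>("my-topic", "key-" + i, "value-" + i));
            }
            producer.close(); // flushes any remaining batched records
        }
    }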

Kafka Client in Rust

2015-05-10 Thread Yousuf Fauzan
Hi All, I have created a Kafka client for Rust. The client supports Metadata, Produce, Fetch, and Offset requests. I plan to add support for Consumers and Offset management soon. Will it be possible to get it added to https://cwiki.apache.org/confluence/display/KAFKA/Clients Info: Pure Rust

Re: Is there a way to know when I've reached the end of a partition (consumed all messages) when using the high-level consumer?

2015-05-10 Thread Ewen Cheslack-Postava
@Gwen- But that only works for topics that have low enough traffic that you would ever actually hit that timeout. The Confluent schema registry needs to do something similar to make sure it has fully consumed the topic it stores data in so it doesn't serve stale data. We know in our case we'll
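The technique alluded to here is comparing the consumer's position against each partition's log end offset instead of relying on a timeout. The 0.8-era high-level consumer does not expose this directly, but as a sketch with a newer Java consumer (2.x, which has endOffsets()), the check looks roughly like this; broker, group, and topic names are placeholders:

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Map;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.TopicPartition;

    public class CaughtUpCheck {
        // Returns true once the consumer's position has reached the log end offset
        // of every assigned partition, i.e. it has read everything that existed
        // at the moment endOffsets() was queried.
        static boolean caughtUp(KafkaConsumer<?, ?> consumer) {
            if (consumer.assignment().isEmpty()) {
                return false; // no partitions assigned yet
            }
            Map<TopicPartition, Long> endOffsets = consumer.endOffsets(consumer.assignment());
            for (Map.Entry<TopicPartition, Long> e : endOffsets.entrySet()) {
                if (consumer.position(e.getKey()) < e.getValue()) {
                    return false;
                }
            }
            return true;
        }

        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");   // placeholder broker
            props.put("group.id", "snapshot-reader");            // placeholder group
            props.put("key.deserializer",
                      "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer",
                      "org.apache.kafka.common.serialization.StringDeserializer");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("my-topic"));  // placeholder topic
                while (true) {
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(200));
                    // ... process records ...
                    if (!records.isEmpty()) {
                        continue;
                    }
                    if (caughtUp(consumer)) {
                        break; // reached the end of all assigned partitions
                    }
                }
            }
        }
    }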

Re: Is there a way to know when I've reached the end of a partition (consumed all messages) when using the high-level consumer?

2015-05-10 Thread Gwen Shapira
For Flume, we use the timeout configuration and catch the exception, with the assumption that no messages for a few seconds == the end. On Sat, May 9, 2015 at 2:04 AM, James Cheng jch...@tivo.com wrote: Hi, I want to use the high level consumer to read all partitions for a topic, and know when
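A minimal sketch of the approach described here, using the 0.8 high-level consumer: set consumer.timeout.ms and treat ConsumerTimeoutException as "caught up for now". ZooKeeper address, group, and topic names are placeholders:

    import java.util.Collections;
    import java.util.List;
    import java.util.Map;
    import java.util.Properties;
    import kafka.consumer.Consumer;
    import kafka.consumer.ConsumerConfig;
    import kafka.consumer.ConsumerIterator;
    import kafka.consumer.ConsumerTimeoutException;
    import kafka.consumer.KafkaStream;
    import kafka.javaapi.consumer.ConsumerConnector;

    public class TimeoutAsEndOfTopic {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("zookeeper.connect", "localhost:2181");  // placeholder ZooKeeper
            props.put("group.id", "flume-style-reader");        // placeholder group
            // If no message arrives within 5 seconds, hasNext() throws
            // ConsumerTimeoutException instead of blocking forever.
            props.put("consumer.timeout.ms", "5000");

            ConsumerConnector connector =
                    Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
            Map<String, List<KafkaStream<byte[], byte[]>>> streams =
                    connector.createMessageStreams(Collections.singletonMap("my-topic", 1));
            ConsumerIterator<byte[], byte[]> it = streams.get("my-topic").get(0).iterator();

            try {
                while (it.hasNext()) {
                    byte[] message = it.next().message();
                    // ... process message ...
                }
            } catch (ConsumerTimeoutException e) {
                // No messages for consumer.timeout.ms; treat this as "end of topic".
                // Only a heuristic -- low-traffic topics pause for this long too.
            } finally {
                connector.shutdown();
            }
        }
    }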

Re: Pulling Snapshots from Kafka, Log compaction last compact offset

2015-05-10 Thread Gwen Shapira
Hi Jonathan, I agree we can have topic-per-table, but some transactions may span multiple tables and therefore will get applied partially out-of-order. I suspect this can be a consistency issue and create a state that is different than the state in the original database, but I don't have good

Re: Pulling Snapshots from Kafka, Log compaction last compact offset

2015-05-10 Thread Hisham Mardam-Bey
With mypipe (MySQL -> Kafka) we've had a similar discussion re: topic names and preserving transactions. At this point:
- Kafka topic names are configurable, allowing for per-db or per-table topics
- transactions maintain a transaction ID for each event when published into Kafka
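As an illustration of the second point (this is not mypipe's actual wire format), a change event published to a per-table topic might carry a transaction ID so a downstream consumer can regroup events that originated in the same MySQL transaction even when they were routed to different topics:

    // Hypothetical change-event envelope; field names and topic scheme are illustrative only.
    public class ChangeEvent {
        public final String database;   // source MySQL database
        public final String table;      // source table, also used to pick the Kafka topic
        public final long txnId;        // shared by every event in the same MySQL transaction
        public final long sequence;     // position of this event within the transaction
        public final byte[] payload;    // serialized row image / mutation

        public ChangeEvent(String database, String table, long txnId, long sequence, byte[] payload) {
            this.database = database;
            this.table = table;
            this.txnId = txnId;
            this.sequence = sequence;
            this.payload = payload;
        }

        // A per-table topic name, e.g. "mypipe.<db>.<table>" (the naming scheme is an assumption).
        public String topic() {
            return "mypipe." + database + "." + table;
        }
    }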