If both C1 and C2 belong to the same consumer group, then a rebalance will be
triggered.
Each consumer subscribes to change events on the consumer id registry within
its group.
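The group-wide rebalance exists because partition ownership is computed from the full sorted list of consumer ids, so every member's assignment can shift when any member joins or leaves. A minimal sketch of range-style assignment (the 0.8 default strategy), using hypothetical consumer ids and a 4-partition topic:

```python
def range_assign(consumer_ids, partitions):
    """Range-style assignment: sort the consumer ids, give each consumer a
    contiguous block of partitions; earlier consumers get one extra partition
    when the count doesn't divide evenly. Because blocks are derived from the
    sorted member list, any membership change can shift every block."""
    members = sorted(consumer_ids)
    per, extra = divmod(len(partitions), len(members))
    assignment, start = {}, 0
    for i, member in enumerate(members):
        count = per + (1 if i < extra else 0)
        assignment[member] = partitions[start:start + count]
        start += count
    return assignment

# With two consumers the partitions split 2/2; adding a C3 would move C2's block.
print(range_assign(["C1", "C2"], [0, 1, 2, 3]))
```

This is why a membership change anywhere in the group triggers a rebalance for everyone: the assignment is a function of the whole member list, not of any one consumer's topics.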
On Mon, May 11, 2015 at 10:55 AM, dinesh kumar dinesh...@gmail.com wrote:
Hi,
I am looking at the code of kafka.consumer.ZookeeperConsumerConnector.scala
(link here
https://github.com/apache/kafka/blob/0.8.2/core/src/main/scala/kafka/consumer/ZookeeperConsumerConnector.scala)
and I see that all consumer ids for a particular group are registered
to the path
But why? What is the reason for triggering a rebalance if none of a
consumer's topics are affected? Is there some reason for triggering a
rebalance regardless of whether the consumer's topics are affected?
On 11 May 2015 at 11:06, Manikumar Reddy ku...@nmsworks.co.in wrote:
Added to the wiki, which required adding a new Rust section :) Thanks for
the contribution, Yousuf!
On Sun, May 10, 2015 at 6:57 PM, Yousuf Fauzan yousuffau...@gmail.com
wrote:
Hi,
I have a requirement in which I have to configure the producer and consumer
asynchronously, so that the queued messages are sent for every 1 MB of
data.
Please provide some help.
Thanks
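One way to get size-based batching is the new (0.8.2) Java producer, whose `batch.size` setting is in bytes. A sketch of the relevant properties (broker address and `linger.ms` value are placeholders, not a recommendation):

```properties
bootstrap.servers=localhost:9092
# Accumulate up to ~1 MB of records per partition before sending a batch
batch.size=1048576
# Wait up to 500 ms for a batch to fill before sending anyway
linger.ms=500
key.serializer=org.apache.kafka.common.serialization.ByteArraySerializer
value.serializer=org.apache.kafka.common.serialization.ByteArraySerializer
```

Note that `batch.size` is per partition, and a batch is also flushed when `linger.ms` expires, so sends can be smaller than 1 MB under low traffic.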
Hi All,
I have created a Kafka client for Rust. The client supports Metadata, Produce,
Fetch, and Offset requests. I plan to add support for Consumers and Offset
management soon.
Will it be possible to get it added to
https://cwiki.apache.org/confluence/display/KAFKA/Clients
Info:
Pure Rust
@Gwen- But that only works for topics that have low enough traffic that you
would ever actually hit that timeout.
The Confluent schema registry needs to do something similar to make sure it
has fully consumed the topic it stores data in so it doesn't serve stale
data. We know in our case we'll
For Flume, we use the timeout configuration and catch the exception, with
the assumption that no messages for a few seconds == the end.
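An alternative to the timeout heuristic, along the lines of what the schema registry comment describes, is to snapshot each partition's log-end offset up front and consume until the consumer's position reaches it. A minimal sketch of the bookkeeping, with hypothetical topic/partition keys and offset maps:

```python
def caught_up(positions, end_offsets):
    """True once the consumer's next-fetch position has reached the log-end
    offset that was recorded for every partition before consuming began."""
    return all(positions.get(tp, 0) >= end for tp, end in end_offsets.items())

# End offsets snapshotted before consuming; positions advance as records are read.
end_offsets = {("topic", 0): 5, ("topic", 1): 3}
print(caught_up({("topic", 0): 5, ("topic", 1): 2}, end_offsets))  # partition 1 still lags
print(caught_up({("topic", 0): 5, ("topic", 1): 3}, end_offsets))
```

Unlike the timeout approach, this terminates deterministically even on high-traffic topics, at the cost of fetching the end offsets first.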
On Sat, May 9, 2015 at 2:04 AM, James Cheng jch...@tivo.com wrote:
Hi,
I want to use the high-level consumer to read all partitions for a topic,
and know when
Hi Jonathan,
I agree we can have topic-per-table, but some transactions may span
multiple tables and therefore will get applied partially out of order. I
suspect this can be a consistency issue and create a state that is
different from the state in the original database, but I don't have good
With mypipe (MySQL - Kafka) we've had a similar discussion re: topic names
and preserving transactions.
At this point:
- Kafka topic names are configurable, allowing for per-db or per-table topics
- Transactions maintain a transaction ID for each event when published into
Kafka
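On the consuming side, one way to cope with per-table topics splitting a transaction is to carry the transaction ID on every event (as described above) and buffer events until the transaction's commit marker arrives. A toy sketch; the event shape and COMMIT marker here are illustrative assumptions, not mypipe's actual format:

```python
def group_by_txn(events):
    """Buffer events by transaction id; release a transaction's events
    together, in arrival order, only once its COMMIT marker is seen.
    Uncommitted transactions stay buffered and are never emitted."""
    buffers, completed = {}, []
    for event in events:
        txn = event["txn_id"]
        if event.get("type") == "COMMIT":
            completed.append((txn, buffers.pop(txn, [])))
        else:
            buffers.setdefault(txn, []).append(event)
    return completed

events = [
    {"txn_id": 1, "table": "orders", "row": "a"},
    {"txn_id": 2, "table": "users", "row": "x"},
    {"txn_id": 1, "table": "items", "row": "b"},
    {"txn_id": 1, "type": "COMMIT"},
]
print(group_by_txn(events))  # only txn 1 is complete; txn 2 remains buffered
```

This restores transaction atomicity at the consumer, though events from different tables must be routed through a view that sees all the relevant topics.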