Hi,
As far as I understand, the log retention time in Kafka deletes messages
older than the retention period. I'm wondering what that implies for a
consumer, since I'm using the simple consumer to manage offsets in a
predefined consumer group.
Say I have a list of messages for a partition of a topic:
1,2,3,
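What this implies in practice: once retention has deleted the messages a committed offset points at, the next fetch from that offset fails with an offset-out-of-range error, and the consumer has to reset to the earliest (or latest) offset the broker still holds. A minimal, language-agnostic sketch of that reset decision (the function and parameter names are illustrative, not Kafka API):

```python
def next_fetch_offset(committed, earliest, latest, reset_to="earliest"):
    """Decide where to resume fetching, given the broker's current
    earliest/latest offsets for the partition.

    If retention already deleted the messages at `committed`, the
    committed offset is out of range and we reset per policy.
    """
    if earliest <= committed <= latest:
        return committed  # committed offset still valid on the broker
    # Out of range: the messages `committed` pointed at were deleted.
    return earliest if reset_to == "earliest" else latest

# Example: offsets 1..2 were deleted by retention; earliest is now 3.
print(next_fetch_offset(committed=1, earliest=3, latest=4))  # 3
```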
offset commit and fetch requests to any
> broker. Kafka-backed consumer offsets are currently in trunk and will
> be released in 0.8.2.
>
> Thanks,
>
> Joel
>
> On Mon, Aug 04, 2014 at 02:57:02PM -0700, Weide Zhang wrote:
> > Hi
> >
> > It seems to me tha
Hi,
It seems to me that 0.8.1.1 doesn't have the ConsumerMetadata API. So which
broker should I choose to commit and fetch offset information?
Should I use ZooKeeper to manage offsets manually instead?
Thanks,
Weide
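If you do manage offsets in ZooKeeper yourself on 0.8.1.1, the convention the high-level consumer uses is /consumers/<group>/offsets/<topic>/<partition>, with the offset stored as the znode's data. A small sketch of just that path convention (the actual ZooKeeper client calls are omitted):

```python
def zk_offset_path(group, topic, partition):
    """ZooKeeper znode path where the 0.8 high-level consumer keeps
    the committed offset for one partition of one topic."""
    return "/consumers/%s/offsets/%s/%d" % (group, topic, partition)

print(zk_offset_path("my-group", "A", 0))
# /consumers/my-group/offsets/A/0
```

Writing to the same path keeps your manually managed offsets compatible with tools that read the high-level consumer's layout.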
On Sun, Aug 3, 2014 at 4:34 PM, Weide Zhang wrote:
ects as a consumer. This could be
> accomplished
> > by watching Zookeeper and getting a notification when A's ephemeral node
> is
> > removed.
> >
> > The high level consumer does seem to be the way to go as long as your
> > application can handle duplicate
Hi,
I'm reading about offset management at the API link:
https://cwiki.apache.org/confluence/display/KAFKA/A+Guide+To+The+Kafka+Protocol#AGuideToTheKafkaProtocol-OffsetCommit/FetchAPI
I have a couple of questions about using the offset fetch and commit
APIs in 0.8.1.1.
1. Is the new offset com
>
> On Fri, Aug 1, 2014 at 3:20 PM, Weide Zhang wrote:
>
> > Hi,
> >
> > I have a use case for a master slave cluster where the logic inside
> master
> > need to consume data from kafka and publish some aggregated data to kafka
> > again. When master di
Hi,
I have a use case for a master/slave cluster where the logic inside the
master needs to consume data from Kafka and publish some aggregated data
back to Kafka. When the master dies, the slave needs to take the latest
committed offset from the master and continue consuming the data from
Kafka and doing the pus
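One way to sketch that hand-off: master and slave share an offset store (ZooKeeper, or Kafka's offset-commit API on versions that support it), and the slave only starts fetching once it detects the master is gone, resuming from whatever the master last committed. The class and the dict-based store below are purely illustrative stand-ins, not Kafka API:

```python
class FailoverConsumer:
    """Sketch: a slave that resumes from the master's last committed
    offset. `offset_store` stands in for ZooKeeper or the offset API."""

    def __init__(self, offset_store, partition_key):
        self.store = offset_store
        self.partition_key = partition_key

    def resume_offset(self):
        # Last offset the (dead) master committed; start at 0 if none.
        return self.store.get(self.partition_key, 0)

store = {"A-0": 42}            # master committed offset 42 before dying
slave = FailoverConsumer(store, "A-0")
print(slave.resume_offset())   # 42
```

The important property is that the master commits *after* its aggregated output is safely published, so the slave re-processes at most the uncommitted tail rather than skipping data.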
Hi,
What's the way to find a topic's partition count dynamically using the
SimpleConsumer API?
If I use one seed broker within a cluster of 10 brokers, and add a list of
topic names to the simple consumer's request for the topics' metadata, when
it returns,
is the size of partitionsMetadata per topicme
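On the partition-count part of the question: the topic metadata response carries one partitionMetadata entry per partition of each requested topic, and any single seed broker can answer the request, so the size of that list is the partition count. Sketched with a plain dict standing in for the response object (the field names mirror the Java API, but this structure is hypothetical):

```python
def partition_count(metadata_response, topic):
    """The number of partitionsMetadata entries returned for a topic
    equals that topic's partition count, regardless of which seed
    broker answered the metadata request."""
    for t in metadata_response["topics"]:
        if t["topic"] == topic:
            return len(t["partitionsMetadata"])
    raise KeyError("topic %r not in metadata response" % topic)

resp = {"topics": [{"topic": "A",
                    "partitionsMetadata": [{"id": 0}, {"id": 1}, {"id": 2}]}]}
print(partition_count(resp, "A"))  # 3
```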
e response message set
size is at least fetchSize? I'm assuming the fetchRequest call to Kafka is
a blocking call.
Thanks a lot,
Weide
On Thu, Jul 10, 2014 at 3:50 PM, Guozhang Wang wrote:
> Yes it can be shared.
>
> Guozhang
>
>
> On Thu, Jul 10, 2014 at 11:12 AM, Weide Zha
> Guozhang
>
>
> On Tue, Jul 1, 2014 at 5:14 PM, Weide Zhang wrote:
>
> > Hi ,
> >
> > Just want to ask some basic question about kafka simple consumer.
> >
> > 1. if I'm using simple consumer and doesn't really depend on zookeeper to
>
Hi,
Just want to ask some basic questions about the Kafka simple consumer.
1. If I'm using the simple consumer and don't depend on ZooKeeper to manage
partition offsets (the application manages offsets itself), will that
remove the ZooKeeper dependency for the consumer?
2. If ZooKeeper dies, will sim
Hi,
I have a question regarding load balancing within a consumer group.
Say I have a consumer group of 4 consumers subscribing to 4 topics, each
of which has one partition. Will rebalancing happen at the topic level, or
should I expect consumer 1 to get all the data?
Weide
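As far as I understand the 0.8 rebalancing algorithm, assignment is computed per topic with a range strategy over the consumers sorted by id, so with one partition per topic the first consumer in sorted order receives the single partition of every topic. In other words: yes, consumer 1 would end up with all the data. A simplified sketch of that per-topic range assignment (not the actual consumer code):

```python
def range_assign(partitions_per_topic, consumers):
    """Simplified 0.8-style range assignment: each topic's partitions
    are divided among the sorted consumers independently, with the
    first consumers absorbing any remainder."""
    consumers = sorted(consumers)
    assignment = {c: [] for c in consumers}
    for topic, n_partitions in partitions_per_topic.items():
        per, extra = divmod(n_partitions, len(consumers))
        start = 0
        for i, c in enumerate(consumers):
            count = per + (1 if i < extra else 0)
            for p in range(start, start + count):
                assignment[c].append((topic, p))
            start += count
    return assignment

# 4 topics x 1 partition, 4 consumers: the first consumer gets everything.
a = range_assign({"t1": 1, "t2": 1, "t3": 1, "t4": 1},
                 ["consumer-1", "consumer-2", "consumer-3", "consumer-4"])
print(len(a["consumer-1"]))  # 4
print(len(a["consumer-2"]))  # 0
```

To spread single-partition topics evenly you would need more partitions per topic, or to distribute the topic subscriptions across the consumers yourself.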
Hi,
According to the Kafka documentation, Java 1.7 seems to be recommended, but
in our production environment we are still on Java 1.6. Will it be a
problem to use Java 1.6 with Kafka 0.8.1.1?
Thanks a lot,
Weide
Hi Guozhang,
In the worst case, ZooKeeper dies for, say, one hour and then comes back
up. Will things still recover automatically after that hour?
Weide
On Wed, May 14, 2014 at 8:28 AM, Guozhang Wang wrote:
> In 0.8, the servers and consumers are heavily dependent on ZK to function.
> With ZK down, the s
Can Kafka survive when ZooKeeper is down and unreachable? Will the
consumer or producer still work in that case?
Weide
r is if the mirror maker is unable to produce
> messages, for example, if the network goes down. If it can still consume
> messages, but cannot produce them, you will lose messages as the consumer
> will continue to commit offsets with no knowledge that the producer is
> failing.
>
Hi,
I have a question about MirrorMaker. Say I have 3 data centers, each
producing topic 'A' on its own Kafka cluster. If the data in the 3 centers
need to be kept in sync with each other, shall I run MirrorMaker in each
data center to pull the data from the other two?
Also, it mentioned tha
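If it helps, the usual shape of that setup is exactly what you describe: in each data center you run MirrorMaker instances that consume from the other two clusters and produce into the local one. A rough sketch of the config files (host names, group id, and topic are placeholders, and the flag names are as I remember them from 0.8):

```properties
# dc2.consumer.properties -- one consumer config per remote (source) cluster
zookeeper.connect=dc2-zk:2181
group.id=mirror-to-dc1

# producer.properties -- the local (target) cluster
metadata.broker.list=dc1-kafka1:9092,dc1-kafka2:9092

# Invocation, run in dc1 (repeat per remote cluster, or pass several
# --consumer.config options):
# bin/kafka-run-class.sh kafka.tools.MirrorMaker \
#     --consumer.config dc2.consumer.properties \
#     --producer.config producer.properties --whitelist 'A'
```

Note that with all three clusters mirroring topic 'A' into each other you would loop messages endlessly; the common pattern is to mirror into a differently named topic (or a dedicated aggregate cluster) rather than back into 'A' itself.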