Re: why kafka index file use memory mapped files, however log file doesn't

2018-02-12 Thread Vincent Dautremont
Just a guess: wouldn't it be because the log files on disk can be made of compressed data when produced, but need to be uncompressed on consumption (of a single message)? 2018-02-12 15:50 GMT+01:00 YuFeng Shen : > Hi jan , > > I think the reason is the same as why index file

Re: kafka broker losing offsets?

2017-10-11 Thread Vincent Dautremont
if it ever happens again in the future. We’ll also upgrade all our clusters to 0.11.0.1 in the next days. > On 11 Oct. 2017 at 17:47, Dmitriy Vsekhvalnov <dvsekhval...@gmail.com> wrote: > > Yeah, just pops up in my list. Thanks, I'll take a look. > > Vincent Dautremo

Re: Incorrect consumer offsets after broker restart 0.11.0.0

2017-10-11 Thread Vincent Dautremont
I would also like to know the related Jira ticket, if any, to check whether what I experience is the same phenomenon. I see this happening even without restarting the Kafka broker process: I sometimes have a Zookeeper socket that fails, and the Kafka broker then steps down from its leader duties for a few

Re: kafka broker losing offsets?

2017-10-06 Thread Vincent Dautremont
Is there a way to read messages on a topic partition from a specific node that we choose (and not from the topic partition leader)? I would like to verify myself that each of the __consumer_offsets partition replicas has the same consumer group offset written in it. On Fri, Oct 6, 2017 at

Re: kafka broker losing offsets?

2017-10-06 Thread Vincent Dautremont
Hi, I have the same setup as Dimitry, and I've experienced exactly the same issue twice this last month (the only difference with Dimitry's setup is that I have librdkafka 0.9.5 clients). It's as if the __consumer_offsets partitions were not synced but still reported as synced (and so

consumer group offset chaos

2017-09-26 Thread Vincent Dautremont
Hi, I've recently experienced a reset of consumer group offsets on a cluster of 3 Kafka nodes (v0.11.0.0). I use 3 high-level consumers using librdkafka 0.9.4. They first ask for the consumer group's assigned partition offsets just after each rebalance and before consuming anything. Every offset related

Re: is a topic compressed?

2017-09-21 Thread Vincent Dautremont
Hi, Snappy keeps a lot of parts in plain text: look at that example where only "pedia" is encoded/tokenized in the sentence. https://en.wikipedia.org/wiki/Snappy_(compression) > Wikipedia is a free, web-based, collaborative, multilingual encyclopedia > project. Your data is then probably
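As a quick illustration, here is a minimal sketch (assuming the snappy-java library, org.xerial.snappy, is on the classpath; any Snappy binding behaves similarly) showing that the compressed bytes still contain readable literal runs of the original sentence:

import java.nio.charset.StandardCharsets;
import org.xerial.snappy.Snappy;

public class SnappyLiteralDemo {
    public static void main(String[] args) throws Exception {
        String text = "Wikipedia is a free, web-based, collaborative, multilingual encyclopedia project.";
        byte[] compressed = Snappy.compress(text.getBytes(StandardCharsets.UTF_8));
        // Snappy stores first occurrences as literal runs, so most of the
        // sentence remains readable; only repeats like "pedia" become back-references.
        System.out.println(new String(compressed, StandardCharsets.ISO_8859_1));
    }
}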

How to scripting Zookeeper - Kafka startup procedure as step by step

2017-05-23 Thread Vincent Dautremont
Hi, working on a Kafka project, I'm trying to set up integration tests using Docker, with Zookeeper and Kafka clusters, my client program(s), and some kafkacat clients on a Docker network. To set up this work context I need to script each action, and I guess I have a beginner problem about

Re: Causes for Kafka messages being unexpectedly delivered more than once? The 'exactly once' semantic

2017-04-13 Thread Vincent Dautremont
One of the cases where you would get a message more than once is if you get disconnected / kicked off the consumer group / etc. and fail to commit offsets for messages you have already read. What I do is insert the message in an in-memory Redis cache database. If it fails to insert because
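A minimal sketch of that dedup approach (assuming a local Redis instance and the Jedis client; the key prefix, message id, and one-day TTL are illustrative choices, not from the original post):

import redis.clients.jedis.Jedis;

public class SeenMessagesCache {
    private final Jedis jedis = new Jedis("localhost", 6379);

    /** Returns true if this message id was already processed, i.e. a duplicate delivery. */
    public boolean alreadySeen(String messageId) {
        String key = "seen:" + messageId;
        // SETNX writes the key only if it does not exist yet; 0 means it was already there.
        if (jedis.setnx(key, "1") == 0) {
            return true;
        }
        jedis.expire(key, 24 * 3600); // keep dedup keys around for a day
        return false;
    }
}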

Re: best practices to handle large messages

2017-04-06 Thread Vincent Dautremont
Hi, you might be interested in this presentation https://www.slideshare.net/JiangjieQin/handle-large-messages-in-apache-kafka-58692297 On Wed, Apr 5, 2017 at 1:27 AM, Mohammad Kargar wrote: > What are best practices to handle large messages (2.5 MB) in Kafka? > > Thanks, >

Minor documentation error

2016-11-21 Thread Vincent Dautremont
Hi, I just want to raise a flag concerning an error in the documentation. It says: > *fetch.max.wait.ms* > The maximum amount of time the server will block before answering the > fetch request if there isn't sufficient data to immediately satisfy the > requirement given by fetch.min.bytes.

Re: Kafka 0.10 Monitoring tool

2016-11-17 Thread Vincent Dautremont
or while executing consumer group command null > org.apache.kafka.common.errors.DisconnectException > > Any help is appreciable? > > > Thanks > Achintya > > -Original Message- > From: Vincent Dautremont [mailto:vincent.dautrem...@olamobile.com] > Sen

Re: Kafka 0.10 Monitoring tool

2016-11-16 Thread Vincent Dautremont
Just a note on that matter Sam: http://mail-archives.apache.org/mod_mbox/kafka-users/201611.mbox/%3CCAD2WViSAgwc9i4-9xEw1oz1xzpsbveFt1%3DSZ0qkHRiFEc3fXbw%40mail.gmail.com%3E On Tue, Nov 15, 2016 at 5:26 PM, Sam Pegler wrote: > If the consumer group is

Re: Checking the consumer lag when using manual partition assignment with the KafkaConsumer

2016-11-15 Thread Vincent Dautremont
> > > > Thanks, > > > > Jiangjie (Becket) Qin > > > > On Sun, Nov 6, 2016 at 4:00 PM, Vincent Dautremont < > > vincent.dautrem...@olamobile.com> wrote: > > > >> By the way I remember having read somewhere on this list that this > ut

Kafka and compression

2016-11-14 Thread Vincent Dautremont
Hi, can anyone explain to me in more detail how Kafka works with compression? I've read the doc but it's not all clear to me. - There are compression settings on the broker, on a broker's topic, and on a producer. Are they all the same setting, with one taking precedence over another? - Is there a

Kafka and compression

2016-11-10 Thread Vincent Dautremont
Hi, can anyone explain to me in more detail how Kafka works with compression? I've read the doc but it's not all clear to me. - There are compression settings on the broker, on a broker's topic, and on a producer. Are they all the same setting, with one taking precedence over another? - Is there a

Re: Checking the consumer lag when using manual partition assignment with the KafkaConsumer

2016-11-06 Thread Vincent Dautremont
By the way, I remember having read somewhere on this list that this utility not showing info for consumer groups that do not have current active consumers was a bug. That would be a thing to fix; is there an expected fix date / fix release for this? > On 6 Nov. 2016 at 13:23, Robert Metzger

Re: How to keep consumers alive without polling new messages

2016-09-28 Thread Vincent Dautremont
I had the same problem: call pause() on all partitions, then continue your loop that calls consume(); it will then poll without consuming messages. When you want to consume again, call resume() on all partitions. It's not obvious at all; the doc should explain that in the documentation of
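A minimal sketch of the same idea with the Java consumer (the thread above uses librdkafka's consume(); here poll() plays that role, assuming the 0.10+ Java client where pause()/resume() take a collection of partitions):

import java.util.concurrent.atomic.AtomicBoolean;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public final class PausedPolling {
    /** Keeps the consumer in its group without delivering records until 'blocked' clears. */
    static void pollWhilePaused(KafkaConsumer<String, String> consumer, AtomicBoolean blocked) {
        consumer.pause(consumer.assignment());   // no records are returned for paused partitions
        while (blocked.get()) {
            consumer.poll(100);                  // still sends heartbeats and handles rebalances
        }
        consumer.resume(consumer.assignment());  // back to normal consumption
    }
}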

Kafka Consumer group (High level consumer)

2016-09-22 Thread Vincent Dautremont
Hi, I'm looking for *consumer group* related settings of the Kafka server/cluster. - How can we tell the server to delete a consumer group if it has been inactive longer than a specific time? - Can this period be infinite? - Can this setting be specific to a consumer group? - Can there be a

How to set the offset of a topic:partition for a specific consumer group to replay / reconsume messages?

2016-09-05 Thread Vincent Dautremont
Hi, this seems like a basic question but I can't find the answer: I'm trying to find the right tool (in kafka/bin) to set an offset value of a topic:partition for a specific consumer group in order to replay consumed messages. This link tells how to get the offset of the topic:partition of a
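For reference, one way to set a group's offset programmatically with the Java consumer (a sketch; the topic "events", partition 0, target offset 1200 and group "my-group" are made-up values):

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class ResetGroupOffset {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "my-group");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");

        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            TopicPartition tp = new TopicPartition("events", 0);
            consumer.assign(Collections.singletonList(tp));
            // Commit the desired offset for this group; its consumers will
            // resume from offset 1200 on this partition.
            consumer.commitSync(Collections.singletonMap(tp, new OffsetAndMetadata(1200L)));
        }
    }
}

Later Kafka releases (0.11.0+) also ship a kafka-consumer-groups.sh --reset-offsets option that does the same thing from the command line.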