Just a guess: wouldn't it be because the log files on disk can be made of
compressed data when produced, but need to be uncompressed on consumption
(of a single message)?
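If that's the cause, the idea can be sketched with plain gzip over a batch of messages (a toy illustration only, not Kafka's actual message-set format): the broker stores the compressed blob as-is, so serving even one message requires decompressing the whole batch.

```python
import gzip

# Toy illustration: a producer batches several messages and compresses
# the whole batch; the broker stores that compressed blob on disk as-is.
messages = [b"msg-0", b"msg-1", b"msg-2"]
batch = gzip.compress(b"\n".join(messages))

def read_single(batch_blob: bytes, index: int) -> bytes:
    # To hand a consumer just one message, the whole blob must first be
    # decompressed -- there is no random access into the gzip stream.
    return gzip.decompress(batch_blob).split(b"\n")[index]

print(read_single(batch, 1))  # b'msg-1'
```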
2018-02-12 15:50 GMT+01:00 YuFeng Shen :
> Hi Jan,
>
> I think the reason is the same as why index file
if it ever happens again in the future.
We’ll also upgrade all our clusters to 0.11.0.1 in the next few days.
> On 11 Oct 2017, at 17:47, Dmitriy Vsekhvalnov <dvsekhval...@gmail.com> wrote
> :
>
> Yeah just pops up in my list. Thanks, i'll take a look.
>
> Vincent Dautremo
I would also like to know the related Jira ticket, if any, to check
whether what I experience is the same phenomenon.
I see this happening even without restarting the Kafka broker process:
I sometimes have a Zookeeper socket that fails; the Kafka broker then steps
down from its leader duties for a few
Is there a way to read messages on a topic partition from a specific node
that we choose (and not from the topic partition leader)?
I would like to check myself that each of the __consumer_offsets partition
replicas has the same consumer group offset written in it.
On Fri, Oct 6, 2017 at
Hi,
I have the same setup as Dimitry, and I've experienced exactly the same
issue twice this last month.
(The only difference with Dimitry's setup is that I have librdkafka 0.9.5
clients.)
It's as if the __consumer_offsets partitions were not synced but still
reported as synced (and so
Hi,
I've recently experienced a reset of consumer group offsets on a cluster of
3 Kafka nodes (v0.11.0.0).
I use 3 high-level consumers using librdkafka 0.9.4.
They first ask for the consumer group's assigned partition offsets just after
each rebalance and before consuming anything.
every offset related
Hi,
Snappy keeps a lot of parts in plain text:
look at that example, where only "pedia" is encoded/tokenized in the sentence.
https://en.wikipedia.org/wiki/Snappy_(compression)
> Wikipedia is a free, web-based, collaborative, multilingual encyclopedia
> project.
your data is then probably
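The effect can be sketched with a toy LZ-style tokenizer (this is not Snappy's real wire format, just an illustration of literal vs. copy elements): a repeated substring like "pedia" becomes a back-reference, while everything seen for the first time stays as a plain-text literal.

```python
def toy_compress(text: str, min_len: int = 5) -> list:
    """Toy LZ-style pass: emit ('lit', s) for never-seen text and
    ('copy', offset, length) for substrings that occurred earlier.
    Not Snappy's actual format -- it only shows why unique text
    stays readable in the compressed output."""
    out, i = [], 0
    while i < len(text):
        best, best_off = 0, 0
        for j in range(i):  # longest earlier match starting at i
            k = 0
            while (i + k < len(text) and j + k < i
                   and text[j + k] == text[i + k]):
                k += 1
            if k > best:
                best, best_off = k, j
        if best >= min_len:
            out.append(("copy", best_off, best))
            i += best
        else:  # extend (or start) a literal run
            if out and out[-1][0] == "lit":
                out[-1] = ("lit", out[-1][1] + text[i])
            else:
                out.append(("lit", text[i]))
            i += 1
    return out

def toy_decompress(tokens: list) -> str:
    out = ""
    for tok in tokens:
        if tok[0] == "lit":
            out += tok[1]
        else:
            _, off, length = tok
            out += out[off:off + length]
    return out

sentence = "Wikipedia is a free encyclopedia"
tokens = toy_compress(sentence)
# only "pedia" is turned into a copy token; the rest stays literal
print(tokens)
```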
Hi,
Working on a Kafka project, I'm trying to set up integration tests using Docker,
to have Zookeeper and Kafka clusters, my client program(s), and some kafkacat
clients on a Docker network.
To set up this working context I need to script each action, and I guess I have
a beginner problem about
One of the cases where you would get a message more than once is if you get
disconnected / kicked off the consumer group / etc. when you fail to commit
offsets for messages you have already read.
What I do is insert the message into an in-memory Redis cache database.
If it fails to insert because
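The dedup idea above can be sketched as follows, with a plain dict standing in for the Redis cache (real Redis would use something like a SET-if-not-exists on the message id); `process_once` is a made-up helper name:

```python
# dict standing in for the Redis cache keyed by message id
seen = {}

def process_once(msg_id: str, payload: str, handler) -> bool:
    """Process a message only if its id was never cached before."""
    if msg_id in seen:       # redelivery after a failed offset commit
        return False
    seen[msg_id] = payload   # the "insert into the cache" step
    handler(payload)
    return True

results = []
process_once("m-1", "hello", results.append)
process_once("m-1", "hello", results.append)  # duplicate: skipped
print(results)  # ['hello']
```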
Hi,
you might be interested in this presentation:
https://www.slideshare.net/JiangjieQin/handle-large-messages-in-apache-kafka-58692297
On Wed, Apr 5, 2017 at 1:27 AM, Mohammad Kargar wrote:
> What are best practices to handle large messages (2.5 MB) in Kafka?
>
> Thanks,
>
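One pattern covered in talks like the one linked above is chunking: split the large payload into records tagged with a message id, a sequence number, and the chunk count, and reassemble on the consumer side. A minimal sketch (helper names `split_payload` / `reassemble` are made up for illustration):

```python
def split_payload(msg_id: str, payload: bytes, chunk_size: int):
    """Split a large payload into (msg_id, seq, total, chunk) records."""
    chunks = [payload[i:i + chunk_size]
              for i in range(0, len(payload), chunk_size)]
    return [(msg_id, seq, len(chunks), c) for seq, c in enumerate(chunks)]

def reassemble(records):
    """Rebuild payloads once every chunk of a msg_id has arrived."""
    by_id, done = {}, {}
    for msg_id, seq, total, chunk in records:
        parts = by_id.setdefault(msg_id, {})
        parts[seq] = chunk
        if len(parts) == total:
            done[msg_id] = b"".join(parts[s] for s in sorted(parts))
    return done

records = split_payload("big-1", b"0123456789", chunk_size=4)
print(reassemble(records))  # {'big-1': b'0123456789'}
```

In practice the chunks would be produced to the same partition (same key) so they arrive in order.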
Hi,
I just want to raise a flag concerning an error in the documentation.
It says:
> *fetch.max.wait.ms*
> The maximum amount of time the server will block before answering the
> fetch request if there isn't sufficient data to immediately satisfy the
> requirement given by fetch.min.bytes.
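For reference, the two settings mentioned, as a consumer config fragment with their 0.11-era defaults (worth double-checking against the current consumer config docs):

```properties
# The broker answers a fetch as soon as fetch.min.bytes of data is
# available, or after fetch.max.wait.ms at the latest.
fetch.min.bytes=1
fetch.max.wait.ms=500
```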
or while executing consumer group command null
> org.apache.kafka.common.errors.DisconnectException
>
> Any help is appreciable?
>
>
> Thanks
> Achintya
>
> -Original Message-
> From: Vincent Dautremont [mailto:vincent.dautrem...@olamobile.com]
> Sen
Just a note on that matter Sam :
http://mail-archives.apache.org/mod_mbox/kafka-users/201611.mbox/%3CCAD2WViSAgwc9i4-9xEw1oz1xzpsbveFt1%3DSZ0qkHRiFEc3fXbw%40mail.gmail.com%3E
On Tue, Nov 15, 2016 at 5:26 PM, Sam Pegler
wrote:
> If the consumer group is
> >
> > Thanks,
> >
> > Jiangjie (Becket) Qin
> >
> > On Sun, Nov 6, 2016 at 4:00 PM, Vincent Dautremont <
> > vincent.dautrem...@olamobile.com> wrote:
> >
> >> By the way I remember having read somewhere on this list that this
> ut
Hi,
Can anyone explain to me in more detail how Kafka works with compression?
I've read the doc but it's not all clear to me.
- There are compression settings on the broker, the topic of a broker, and a
producer.
Are they all the same setting, and does one take precedence over another?
- Is there a
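As far as I understand the 0.11-era configs (worth verifying against the broker, topic, and producer config references), these are three distinct settings with the same name: the producer's `compression.type` chooses the codec used when sending batches, while the broker-wide and per-topic `compression.type` control what the broker stores; the topic-level value overrides the broker-wide one, and the special value `producer` keeps whatever codec the producer used.

```properties
# producer config: codec used by the producer when sending batches
compression.type=snappy

# broker-wide default (server.properties); 'producer' means keep the
# codec the producer used
compression.type=producer

# per-topic override (set via the topic tooling); takes precedence
# over the broker-wide value
compression.type=gzip
```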
By the way, I remember having read somewhere on this list that this utility not
showing info for consumer groups that do not have currently active consumers
was a bug.
That would be a thing to fix; is there an expected fix date / fix release for
this?
> On 6 Nov 2016, at 13:23, Robert Metzger
I had the same problem:
call pause() on all partitions,
then continue your loop that calls consume(); it will then poll without
consuming messages.
When you want to consume again, call resume() on all partitions.
It's not obvious at all; the doc should explain that in the documentation of
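The pattern can be demonstrated with a made-up stand-in class (this `ToyConsumer` is not librdkafka's API; it only mimics the pause-all / keep-polling / resume behaviour described above):

```python
class ToyConsumer:
    """Made-up stand-in mimicking pause/resume semantics: while all
    partitions are paused, poll() delivers nothing but can still run
    (in a real client that keeps heartbeats/callbacks alive)."""
    def __init__(self, messages):
        self.queue = list(messages)
        self.assignment = {"topic-0", "topic-1"}
        self.paused = set()

    def pause(self, partitions):
        self.paused |= set(partitions)

    def resume(self, partitions):
        self.paused -= set(partitions)

    def poll(self):
        if self.paused >= self.assignment or not self.queue:
            return None          # paused: nothing is consumed
        return self.queue.pop(0)

c = ToyConsumer(["m1", "m2"])
c.pause(c.assignment)
print(c.poll())   # None -- the loop keeps running, nothing consumed
c.resume(c.assignment)
print(c.poll())   # m1 -- consumption picks up where it left off
```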
Hi,
I'm looking for *consumer group* related settings of the Kafka
server/cluster.
- How can we tell the server to delete a consumer group if it has been
inactive longer than a specific time?
- Can this period be infinite?
- Can this setting be specific to a consumer group?
- can there be a
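For what it's worth, the closest broker-side setting I know of in the 0.11-era configs is `offsets.retention.minutes`: committed offsets of a group with no active members are discarded after that period. To my knowledge it is cluster-wide rather than per-group, and cannot be infinite (though it can be set very large); worth confirming against the broker config reference.

```properties
# server.properties: how long committed offsets of an inactive consumer
# group are retained before being discarded (default 1440 = 1 day)
offsets.retention.minutes=1440
# how often the offsets cleanup task runs
offsets.retention.check.interval.ms=600000
```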
Hi,
This seems like a basic question but I can't find the answer:
I'm trying to find the right tool (in kafka/bin) to set an offset value of
a topic:partition for a specific consumer group, in order to replay consumed
messages.
This link tells how to get the offset of the topic:partition of a