We are running into KAFKA-1641, where the log cleaner thread dies with the
INFO message below. Is there a workaround for this issue? We are running Kafka 0.9.0.
2017-09-18 16:12:41,621 INFO kafka.log.LogCleaner: Cleaner 0: Building offset
map for log __consumer_offsets-16 for 2149 segments in offset
Kafka Version: 0.10.0.1 / 0.10.2.1
On 2017/9/19 9:44, Zor X.L. wrote:
Hi,
Recently in our experiments, we have found that even though no resource usage
reaches 80%, consumers slow down the producer (which we did not
expect), especially when there are no messages in the topic.
*We wonder if we
Hi,
Recently in our experiments, we have found that even though no resource usage
reaches 80%, consumers slow down the producer (which we did not
expect), especially when there are no messages in the topic.
*We wonder if we did something wrong (where?), or whether this is Kafka's
Thanks, Guozhang.
On Mon, Sep 18, 2017 at 5:23 PM, Guozhang Wang wrote:
> It is available online now:
> https://www.confluent.io/kafka-summit-sf17/resource/
>
>
> Guozhang
>
> On Tue, Sep 19, 2017 at 8:13 AM, Raghav wrote:
>
> > Hi
> >
> > Just
It is available online now:
https://www.confluent.io/kafka-summit-sf17/resource/
Guozhang
On Tue, Sep 19, 2017 at 8:13 AM, Raghav wrote:
> Hi
>
> Just wondering if the videos from Kafka Summit 2017 are available anywhere
> to watch?
>
> --
> Raghav
>
--
-- Guozhang
Hi
Just wondering if the videos from Kafka Summit 2017 are available anywhere
to watch?
--
Raghav
Thanks, Vito ... that worked!
On Sun, Sep 17, 2017 at 9:02 PM, 鄭紹志 wrote:
> Hi, Karan,
>
> It looks like you need to add a property 'value.deserializer' to
> kafka-console-consumer.sh.
>
> For example:
> $ bin/kafka-console-consumer.sh --bootstrap-server localhost:9092
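(To spell that out as a rough sketch: the topic name and the Long deserializer
below are assumptions for illustration, and the deserializer class must match
how the values were actually serialized.)

$ bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 \
    --topic my-topic --from-beginning \
    --property value.deserializer=org.apache.kafka.common.serialization.LongDeserializer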
Hi Scott,
There is nothing preventing a replica running a newer version from being in
sync as long as the instructions are followed (i.e.
inter.broker.protocol.version has to be set correctly and, if there's a
message format change, log.message.format.version). That's why I asked
Yogesh for more
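(As a rough sketch of that configuration during the first rolling bounce; the
version numbers are placeholders and must match the version being upgraded from:)

# server.properties on each broker while the new binaries are rolled out
inter.broker.protocol.version=0.10.0
log.message.format.version=0.10.0
# once every broker runs the new code, bump inter.broker.protocol.version and
# do another rolling restart; only then raise log.message.format.version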
Hi everyone,
We've seen instances where our consumer groups, while running normally, do not
process any messages from some partitions for minutes, while other
partitions see regular updates within seconds. In some cases when a
consumer group had a significant lag (hours of messages), some
Hi Hugues.
How 'big' are your transactions? In particular, how many produce records
are in a single transaction? Can you share your actual producer code?
Also, did you try the `kafka-producer-perf-test.sh` tool with a
transactional id and see what the latency is for transactions with that
tool?
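(Something along these lines, as a sketch only; the topic, record count and
sizes are placeholders, and the --transactional-id / --transaction-duration-ms
options are assumed to be present in the 0.11.0.x tool, so please check them
against your installation.)

$ bin/kafka-producer-perf-test.sh --topic txn-test --num-records 100000 \
    --record-size 100 --throughput -1 \
    --producer-props bootstrap.servers=localhost:9092 \
    --transactional-id perf-txn --transaction-duration-ms 100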
Can we get some clarity on this point:
>older version leader is not allowing newer version replicas to be in sync,
so the data pushed using this older version leader
That is super scary.
What protocol version is the older version leader running?
Would this happen if you are skipping a protocol
Hi Yogesh,
Can you please clarify what you mean by "observing data loss"?
Ismael
On Mon, Sep 18, 2017 at 5:08 PM, Yogesh Sangvikar <
yogesh.sangvi...@gmail.com> wrote:
> Hi Team,
>
> Please help to find resolution for below kafka rolling upgrade issue.
>
> Thanks,
>
> Yogesh
>
> On Monday,
Hi Team,
Please help to find resolution for below kafka rolling upgrade issue.
Thanks,
Yogesh
On Monday, September 18, 2017 at 9:03:04 PM UTC+5:30, Yogesh Sangvikar
wrote:
>
> Hi Team,
>
> Currently, we are using a Confluent 3.0.0 Kafka cluster in our production
> environment. And we are
Hi,
I am testing an app with transactions on the producer side of Kafka
(0.11.0.1). I defined the producer config (see below) and added the
necessary lines in the app (#initTransactions, #beginTransaction and
#commitTransaction) around the existing #send
The problem I am facing is that each
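(For reference, the call pattern described above looks roughly like the sketch
below; the bootstrap server, transactional.id and topic are placeholders, not
the actual app's configuration.)

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.KafkaException;
import org.apache.kafka.common.errors.AuthorizationException;
import org.apache.kafka.common.errors.OutOfOrderSequenceException;
import org.apache.kafka.common.errors.ProducerFencedException;
import org.apache.kafka.common.serialization.StringSerializer;

public class TxnProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");      // placeholder
        props.put("transactional.id", "my-transactional-id");  // placeholder, unique per producer instance
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        producer.initTransactions();             // once, before the first transaction
        try {
            producer.beginTransaction();
            producer.send(new ProducerRecord<>("my-topic", "key", "value"));  // the existing #send calls
            producer.commitTransaction();
        } catch (ProducerFencedException | OutOfOrderSequenceException | AuthorizationException e) {
            producer.close();                    // fatal errors: the producer must be closed
        } catch (KafkaException e) {
            producer.abortTransaction();         // transient error: abort, then retry if desired
        }
        producer.close();
    }
}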
Understood, but since we haven't updated to use 5.7.3 yet, I think it's
best to test against what is currently deployed.
Thanks.
On Mon, Sep 18, 2017 at 9:56 AM, Ted Yu wrote:
> We're using rocksdb 5.3.6
>
> It would make more sense to perform next round of experiment
We're using RocksDB 5.3.6.
It would make more sense to perform the next round of experiments using RocksDB
5.7.3, which is the latest.
Cheers
On Mon, Sep 18, 2017 at 5:00 AM, Bill Bejeck wrote:
> I'm following up from your other thread as well here. Thanks for the info
> above, that
Hi,
I just sent you a follow-up message on the other thread we have going
regarding state store performance.
I guess we can consider this thread closed and we'll continue working on
the State Store thread.
Thanks!
Bill
On Mon, Sep 18, 2017 at 7:27 AM, dev loper wrote:
>
I'm following up from your other thread as well here. Thanks for the info
above, that is helpful.
I think the AWS instance type might be a factor here, but let's do some
more homework first.
As a next step, we could enable logging for RocksDB so we can observe its
performance.
Here is some
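(One way to do that, sketched below under the assumption that a custom
RocksDBConfigSetter is registered via StreamsConfig.ROCKSDB_CONFIG_SETTER_CLASS_CONFIG;
the log level and dump period are illustrative values.)

import java.util.Map;
import org.apache.kafka.streams.state.RocksDBConfigSetter;
import org.rocksdb.InfoLogLevel;
import org.rocksdb.Options;

// Register with: props.put(StreamsConfig.ROCKSDB_CONFIG_SETTER_CLASS_CONFIG, VerboseRocksDBConfig.class);
public class VerboseRocksDBConfig implements RocksDBConfigSetter {
    @Override
    public void setConfig(final String storeName, final Options options, final Map<String, Object> configs) {
        options.setInfoLogLevel(InfoLogLevel.INFO_LEVEL);  // have RocksDB write its internal LOG file per store
        options.setStatsDumpPeriodSec(60);                 // dump compaction/memtable statistics every 60 seconds
    }
}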
Hi Ted, Damian, Bill & Sabarish,
I would like to thank you all for the help offered in solving this
issue. It seems the persistent store was not scaling out as expected.
After the state store builds up over a period of time, the Kafka Streams
application was performing poorly