No, it is not common/expected. It will be difficult to give you any
useful advice without your logs - please send over your state change
logs and broker logs.
On Mon, Apr 21, 2014 at 11:36:34PM -0500, Kashyap Mhaisekar wrote:
No. Is this error common? How can I overcome it?
Regards,
Kashyap
If you only have 1 replica, then when the broker hosting that replica is down,
the partition will have no leader. A broker can be down due to soft failures.
See
https://cwiki.apache.org/confluence/display/KAFKA/FAQ#FAQ-Whypartitionleadersmigratethemselvessometimes
?
Thanks,
Jun
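As a rough illustration of the point above (the tool name and flags here are from later 0.8.x releases; 0.8.0 shipped a similarly-flagged kafka-create-topic.sh instead), creating a topic with more than one replica means a single broker failure does not leave partitions leaderless:

```shell
# Illustrative only - requires a running cluster and ZooKeeper.
# With --replication-factor 2, each partition survives one broker failure.
bin/kafka-topics.sh --create --zookeeper localhost:2181 \
  --topic my-topic --partitions 4 --replication-factor 2
```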
On Mon, Apr 21, 2014 at 1:36
Hi,
I wanted to set the message expiry for a message on a Kafka topic. Is there
anything like this in Kafka?
I came across the properties *log.retention.hours* and
*topic.log.retention.hours* and had some queries around them. It was
mentioned that topic.log.retention.hours is a per-topic configuration.
Which version of Kafka are you using?
You can read up on the configuration options here:
http://kafka.apache.org/documentation.html#configuration
You can specify time-based retention using log.retention.minutes, which
will apply to all topics. You can override that on a per-topic basis -
see
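As a hedged sketch of the per-topic override (topic-level configs such as retention.ms landed in kafka-topics.sh around 0.8.1; exact availability depends on your version):

```shell
# Illustrative only - retention.ms is the topic-level analogue of the
# broker-wide log.retention.minutes/hours settings.
bin/kafka-topics.sh --alter --zookeeper localhost:2181 \
  --topic my-topic --config retention.ms=86400000   # 1 day
```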
Hi,
Please help me understand how one should estimate upper limit for
log.retention.bytes in this situation.
Let's say kafka cluster has 3 machines (broker per machine) with 15TB disk
space per machine.
Cluster will have one topic with 30 partitions and replication factor 2.
My thinking is:
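One way to bound it, as a rough sketch (assuming log.retention.bytes applies per partition, which it does, and ignoring segment-size granularity and index overhead - in practice leave some headroom):

```java
public class RetentionEstimate {
    // log.retention.bytes is a *per-partition* cap, so bound it by each
    // broker's disk divided by the partition replicas that broker hosts.
    static long upperBoundBytes(long diskPerBrokerBytes, int brokers,
                                int partitions, int replicationFactor) {
        long replicasPerBroker = (long) partitions * replicationFactor / brokers;
        return diskPerBrokerBytes / replicasPerBroker;
    }

    public static void main(String[] args) {
        long fifteenTb = 15L * 1024 * 1024 * 1024 * 1024;
        // 30 partitions x RF 2 = 60 replicas over 3 brokers = 20 per broker,
        // so 15 TB / 20 = 768 GiB per partition at most.
        System.out.println(upperBoundBytes(fifteenTb, 3, 30, 2));
    }
}
```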
I'm updating the latest offset consumed to the ZooKeeper directory.
Say, for example, if my last consumed message has an offset of 5, I update it
in the path, but when I check the ZooKeeper path it shows 6 after some time.
Does any other process update it?
From:
Please find my code to commit offsets:
public void handleAfterConsumption(MessageAndMetadata<K, P> mAndM) {
    String commitPerThread =
        props.getProperty("commitperthread", "N");
    DESMetadata metadata = new
        DESMetadata(mAndM.topic(),
Yes, your estimate is correct.
Thanks,
Jun
On Tue, Apr 22, 2014 at 6:16 PM, Andrey Yegorov andrey.yego...@gmail.com wrote:
Hi,
Please help me understand how one should estimate upper limit for
log.retention.bytes in this situation.
Let's say kafka cluster has 3 machines (broker per
Do you have auto commit disabled?
Thanks,
Jun
On Tue, Apr 22, 2014 at 7:10 PM, Seshadri, Balaji
balaji.sesha...@dish.com wrote:
I'm updating the latest offset consumed to the ZooKeeper directory.
Say, for example, if my last consumed message has an offset of 5, I update it
in the path, but when I
Yes, I disabled it.
My doubt is whether the path should hold the offset to be consumed or the last consumed offset.
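For what it's worth, Kafka's usual convention (sketched below with hypothetical names, not the code from this thread) is that the committed offset is the *next* offset to be consumed, i.e. last consumed + 1 - which by itself would turn a last-consumed offset of 5 into a stored 6:

```java
public class OffsetConvention {
    // Kafka stores the offset of the NEXT message to fetch,
    // not the offset of the last message consumed.
    static long offsetToStore(long lastConsumedOffset) {
        return lastConsumedOffset + 1;
    }

    public static void main(String[] args) {
        // After consuming the message at offset 5, store 6.
        System.out.println(offsetToStore(5L)); // prints 6
    }
}
```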
-Original Message-
From: Jun Rao [mailto:jun...@gmail.com]
Sent: Tuesday, April 22, 2014 9:52 PM
To: users@kafka.apache.org
Subject: Re: commitOffsets by partition 0.8-beta
Do you have
Thanks, Joel. I am using version 2.8.0.
Thanks,
Kashyap
On Tue, Apr 22, 2014 at 5:53 PM, Joel Koshy jjkosh...@gmail.com wrote:
Which version of Kafka are you using?
You can read up on the configuration options here:
http://kafka.apache.org/documentation.html#configuration
You can specify