Thanks Harsha. In my case the replica doesn't catch up at all; the last log
date is 5 days ago. It seems the failed replica has been excluded from the
replication list. I am looking for a command that can add the replica back
to the ISR list or force it to start syncing again.
On Sat, Feb 28, 2015 at 4:27
You can increase num.replica.fetchers (by default it's 1) and also try
increasing replica.fetch.max.bytes
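For reference, both settings live in the broker's server.properties; an illustrative (not prescriptive) sketch:

```properties
# Broker server.properties sketch -- values here are illustrative only
num.replica.fetchers=4            # default is 1; more fetcher threads can speed catch-up
replica.fetch.max.bytes=5242880   # must be at least as large as your biggest message
```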
-Harsha
On Fri, Feb 27, 2015, at 11:15 PM, tao xiao wrote:
Hi team,
I had a replica node that was shut down improperly due to no disk space
left. I managed to clean up the disk and restarted
What is the best way to detect consumer lag?
We are running each consumer as a separate group, and I am running
ConsumerOffsetChecker to assess the partitions and the lag for each
group/consumer. I run this every 5 minutes; in some cases I run this command
up to 75 times in each 5-minute window.
Thanks! 245,146 years is more than enough for me.
On Sat, Feb 28, 2015 at 2:58 PM, Jay Kreps jay.kr...@gmail.com wrote:
It is totally reasonable to have unlimited retention. We don't have an
explicit setting for this but you can set the time based retention policy
to something large
Are you using Kafka based offset commit or ZK based offset commit?
On 2/28/15, 6:16 AM, Gene Robichaux gene.robich...@match.com wrote:
What is the best way to detect consumer lag?
We are running each consumer as a separate group and I am running the
ConsumerOffsetChecker to assess the
Here's an example of a frame that looks malformed. I've split the bytes
apart and annotated the pieces with the names from the Kafka protocol guide
(
https://cwiki.apache.org/confluence/display/KAFKA/A+Guide+To+The+Kafka+Protocol#AGuideToTheKafkaProtocol-FetchResponse
).
Notice that the
Hi,
Mostly out of curiosity I'm writing a Kafka client, and I'm getting stuck
at decoding fetch responses. Most of the time everything goes fine, but
quite often I get frames back that seem to be wrong. I'm sure I've
misunderstood something about the spec, so maybe someone can get me on the
right
It is totally reasonable to have unlimited retention. We don't have an
explicit setting for this but you can set the time based retention policy
to something large
log.retention.hours=2147483647
which will retain the log for 245,146 years. :-)
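As a sanity check on that figure (a trivial sketch, assuming 365-day years):

```python
# Verify the retention arithmetic: 2147483647 hours expressed in 365-day years
hours = 2147483647            # max int value for log.retention.hours
years = hours // (24 * 365)
print(years)                  # 245146
```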
-Jay
On Fri, Feb 27, 2015 at 4:12 PM, Warren Kiser
I think we use ZK-based offset commit. However, I am not certain; I would have
to get that from our DEV group. My role is PROD Ops.
Gene Robichaux
Manager, Database Operations
Match.com
8300 Douglas Avenue | Suite 800 | Dallas, TX 75225
-Original Message-
From: Jiangjie Qin
Bump... Looking for a Kafka Producer Object pool to use in Spark Streaming
inside foreachPartition
On Wed, Jan 14, 2015 at 8:40 PM, Josh J joshjd...@gmail.com wrote:
Hi,
Does anyone have a serializable Kafka producer pool that uses the
KafkaProducer.createProducer() method? I'm trying to use
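One common workaround is to skip serializing the producer entirely and instead create it lazily, once per executor process, inside foreachPartition. A minimal Python sketch of that pattern (the factory name and wiring are hypothetical; the original question concerns the Java/Scala KafkaProducer):

```python
# Process-local lazy singleton: the producer is built on first use inside the
# executor process, never serialized and shipped from the driver.
_producer = None

def get_or_create_producer(factory):
    """Return the process-local producer, creating it via factory() once."""
    global _producer
    if _producer is None:
        _producer = factory()
    return _producer

# Illustration: the factory runs exactly once, however many times we ask.
calls = []
p1 = get_or_create_producer(lambda: calls.append(1) or object())
p2 = get_or_create_producer(lambda: calls.append(1) or object())
assert p1 is p2 and len(calls) == 1
```

The same shape in Java/Scala is a lazily-initialized static (or broadcast-wrapped) producer per JVM, which avoids the serialization problem altogether.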
Can you check if your replica fetcher thread is still running on broker 1?
Also, you may check the public access log on broker 5 to see if there are
fetch requests from broker 1.
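A quick way to check (a sketch; assumes you have the broker's PID and the JDK tools on the box):

```shell
# Look for live replica fetcher threads in broker 1's JVM (PID assumed known)
jstack <broker1-pid> | grep ReplicaFetcherThread
# A healthy follower shows ReplicaFetcherThread-<fetcherId>-<sourceBrokerId> entries
```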
On 2/28/15, 12:39 AM, tao xiao xiaotao...@gmail.com wrote:
Thanks Harsha. In my case the replica doesn't catch up at
Hi Puneet,
One of the conditions for K3 to get back into the ISR is that K3's log end
offset be higher than K1 (the leader replica)'s high watermark.
If batch 2 is committed, then the leader high watermark will be above the
offsets of messages in batch 2.
In order to be added into ISR again, K3 has to at least
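In other words, the eligibility condition can be sketched roughly as follows (a simplification; real brokers also track how recently the follower fetched):

```python
def can_rejoin_isr(follower_leo, leader_hw):
    """Simplified ISR re-entry check: the follower's log end offset (LEO)
    must have caught up to at least the leader's high watermark (HW)."""
    return follower_leo >= leader_hw

# K3 behind the leader's HW stays out; once caught up it is eligible again.
assert can_rejoin_isr(follower_leo=80, leader_hw=100) is False
assert can_rejoin_isr(follower_leo=120, leader_hw=100) is True
```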
Is this what you are looking for?
http://kafka.apache.org/07/documentation.html
On Fri, Feb 27, 2015 at 7:02 PM, Philip O'Toole
philip.oto...@yahoo.com.invalid wrote:
There used to be available a very lucid page describing Kafka 0.7, its
design, and the rationale behind certain decisions. I last
Can I do an offset reset based on time rather than by message id? Most of the
time we know when the processing engine failed (based on time), so it would be
easy to reset the offset to that time for reprocessing.
--
SunilKalva
If it is ZK based offset commit, you can use the ConsumerOffsetChecker tool
in kafka.tools.
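For example (the ZK connect string, group, and topic below are placeholders):

```shell
bin/kafka-run-class.sh kafka.tools.ConsumerOffsetChecker \
  --zkconnect zk1:2181 --group my-group --topic my-topic
```

It prints per-partition committed offset, log end size, and lag for the group.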
On Sat, Feb 28, 2015 at 12:32 PM, Gene Robichaux gene.robich...@match.com
wrote:
I think we ZK based offset commit. However I am not certain, I would have
to get that from our DEV group. My role is PROD
Guangle,
The deletion of the segment log/index files is async, i.e. when Kafka
decides to clean the logs, it only adds a .deleted suffix to the files
so that they will not be accessed any more by other Kafka threads. The actual
file deletion will be executed later, with a period controlled by
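If memory serves, the broker setting involved is file.delete.delay.ms (default 60000 ms in the 0.8.x line; please verify against your broker version):

```properties
# server.properties: delay before .deleted segment files are physically removed
file.delete.delay.ms=60000
```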
You can use the offset request (kafka.api.OffsetRequest) with the given
timestamp to get the offset of a message published before that timestamp.
Note that it is coarse-grained: the returned offset may at worst be one log
file segment earlier than the timestamp.
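The same lookup is also available from the command line via GetOffsetShell (broker list and topic below are placeholders; --time takes a millisecond timestamp, with -1 for latest and -2 for earliest):

```shell
bin/kafka-run-class.sh kafka.tools.GetOffsetShell \
  --broker-list broker1:9092 --topic my-topic --time 1425081600000
```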
On Sat, Feb 28, 2015 at
That the message set size was always the same when this happened made me
start looking for that number, and it turned out it was what I had set as
the MaxBytes field of my requests. This makes me think that the next message
to fetch is larger than this limit, but it's sent (truncated) anyway.
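That matches the documented behavior: a fetch response may end with a partial message when MaxBytes truncates the set, and the client is expected to drop the partial message and refetch with a larger limit. A minimal decoding sketch (simplified 0.8-style message-set framing: 8-byte offset, 4-byte size, then the message bytes; CRC/attributes are ignored here):

```python
import struct

def decode_message_set(buf):
    """Decode a simplified Kafka 0.8-style message set, skipping a trailing
    partial message (the broker may cut the last message at MaxBytes)."""
    msgs, pos = [], 0
    while pos + 12 <= len(buf):                 # need offset (8) + size (4)
        offset, size = struct.unpack_from(">qi", buf, pos)
        if pos + 12 + size > len(buf):          # partial trailing message:
            break                               # drop it, refetch with larger MaxBytes
        msgs.append((offset, buf[pos + 12 : pos + 12 + size]))
        pos += 12 + size
    return msgs

# One complete message followed by a truncated one (size claims 100 bytes):
set_bytes = (struct.pack(">qi", 0, 3) + b"abc" +
             struct.pack(">qi", 1, 100) + b"tr")
assert decode_message_set(set_bytes) == [(0, b"abc")]
```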
Hi Honghai,
1. If a partition has no leader (i.e. all of its replicas are down) it will
become offline, and hence the metadata response will not have this
partition's info.
2. All of the brokers cache metadata, and hence any of them can handle the
metadata request. It's just that their caches are updated