Corrupt Message in 0.8.2.2

2016-06-16 Thread Anishek Agarwal
Hello, as part of the log cleaning which compacts the topic, we saw that the cleaner encountered the following exception: kafka.message.InvalidMessageException: Message found with corrupt size (0). How do we proceed, either by configuring the log cleaner to skip it or by some way for us to remove that message?
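The thread carries no answer in this archive, but a common first diagnostic step (not stated in the thread) is to locate the corrupt record with the DumpLogSegments tool that ships with Kafka 0.8.2. A minimal sketch, assuming a hypothetical topic/partition segment path:

```shell
# Hypothetical path: substitute the segment file named in the cleaner's
# error log. --deep-iteration descends into compressed wrapper messages
# so the corrupt inner message is reached; --print-data-log prints payloads.
bin/kafka-run-class.sh kafka.tools.DumpLogSegments \
  --files /data/kafka-logs/mytopic-0/00000000000000000000.log \
  --deep-iteration --print-data-log
```

As far as I know there is no supported 0.8.x flag to make the log cleaner skip a corrupt message; the usual workarounds are letting retention delete the segment or rebuilding the partition from a healthy replica.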

Re: kafka 0.8.2 broker behaviour

2016-05-24 Thread Anishek Agarwal
> Are you sure consumers are always up? When they are behind they could generate a lot of traffic in a small amount of time. > On Mon, May 23, 2016 at 9:11 AM Anishek Agarwal wrote: > > additionally all the read / writes are happening via storm topologies.

Re: kafka 0.8.2 broker behaviour

2016-05-23 Thread Anishek Agarwal
Additionally, all the reads/writes are happening via Storm topologies. On Mon, May 23, 2016 at 12:17 PM, Anishek Agarwal wrote: > Hello, we are using 4 kafka machines in production with 4 topics and each topic either 16/32 partitions and replication factor of 2. each machine h…

kafka 0.8.2 broker behaviour

2016-05-22 Thread Anishek Agarwal
Hello, we are using 4 Kafka machines in production with 4 topics; each topic has either 16 or 32 partitions and a replication factor of 2. Each machine has 3 disks for Kafka logs. We see a strange behaviour: high disk-usage spikes on one of the disks on all machines. It varies over time, wi…
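One plausible explanation (an assumption, not stated in the thread): each partition directory lives on exactly one of the log.dirs disks, and 0.8.x assigns new partitions to the directory with the fewest partitions rather than the most free space, so a single oversized or hot partition will skew one disk. A quick check, assuming a hypothetical three-disk log.dirs layout:

```shell
# Hypothetical log.dirs paths; lists the five largest partition
# directories per disk (sizes in KB, largest last).
for d in /disk1/kafka-logs /disk2/kafka-logs /disk3/kafka-logs; do
  echo "== $d =="
  du -s "$d"/* | sort -n | tail -5
done
```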

Re: SimpleConsumerShell not honouring all options

2016-03-08 Thread Anishek Agarwal
…esn't look like a standard '-', is it possible those options simply aren't being parsed correctly? -Ewen > On Tue, Mar 8, 2016 at 12:26 AM, Anishek Agarwal wrote: > Hello, following doc @ > http…

SimpleConsumerShell not honouring all options

2016-03-08 Thread Anishek Agarwal
Hello, following the doc @ https://cwiki.apache.org/confluence/display/KAFKA/System+Tools#SystemTools-SimpleConsumerShell I tried to print messages using the command ./kafka-run-class.sh kafka.tools.SimpleConsumerShell —-max-messages 1 --no-wait-at-logend —-print-offsets --partition 17 --offset 7644…
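As the reply in this archive suggests, the likely culprit is that two of the flags above begin with an em-dash ("—-") rather than two ASCII hyphens ("--"), so the option parser silently ignores them. A corrected invocation might look like the sketch below; the broker list and topic are hypothetical placeholders, and the offset argument is omitted because it is truncated in the post:

```shell
./kafka-run-class.sh kafka.tools.SimpleConsumerShell \
  --broker-list localhost:9092 \
  --topic mytopic \
  --partition 17 \
  --max-messages 1 \
  --no-wait-at-logend \
  --print-offsets
```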

does leader partition block ? 0.8.2

2016-03-02 Thread Anishek Agarwal
Hello, we have 4 topics deployed on a 4-node Kafka cluster. For one of the topics we are trying to read data from the beginning, using the Kafka high-level consumer. The topic has 32 partitions and we create 32 streams using the high-level consumer so that one partition per stream is used; we then have 32…
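The thread does not show the consumer configuration, but the usual way to re-read a topic from the beginning with the 0.8.x high-level consumer is a consumer group with no committed offsets combined with auto.offset.reset=smallest. A sketch of the relevant properties (the ZooKeeper host and group name are hypothetical):

```properties
# 0.8.x high-level consumer properties (hypothetical values)
zookeeper.connect=zk1:2181
group.id=replay-group-1        # a fresh group has no committed offsets
auto.offset.reset=smallest     # a fresh group then starts at the earliest offset
```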

Compressed message size limits

2015-11-24 Thread Anishek Agarwal
Hello, I am trying to send compressed messages to Kafka. Topic/broker configurations are default; I have provided a "compression.type" of "snappy" on the Kafka producer. The uncompressed message size is 1160668 bytes. The error I get is *org.apache.kafka.common.errors.RecordTooLargeException: The message i…
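The truncated exception is almost certainly the producer-side size check. In the 0.8.2 Java producer, as I understand it, the serialized uncompressed record size is compared against max.request.size (default 1048576 bytes) before compression happens in the record accumulator, so a 1160668-byte message is rejected even if its snappy-compressed form would easily fit. A small Python sketch of the arithmetic, using zlib as a stand-in for snappy (which is not in the standard library):

```python
import zlib

# Default producer-side limit in the 0.8.2 Java client (bytes).
MAX_REQUEST_SIZE = 1048576
# Uncompressed message size reported in the post.
payload_size = 1160668

# The producer checks the *uncompressed* size, so this message is rejected.
print(payload_size > MAX_REQUEST_SIZE)  # True

# A compressed copy that would fit on the wire does not help, because the
# check runs before compression. Stand-in payload that compresses well:
payload = b"x" * payload_size
compressed_size = len(zlib.compress(payload))
print(compressed_size < MAX_REQUEST_SIZE)  # True for this repetitive payload
```

The usual fixes are raising max.request.size on the producer and message.max.bytes (plus replica.fetch.max.bytes) on the broker, or splitting the payload before sending.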