Hello,
As part of log cleaning, which compacts the topic, we saw that it
encountered the following exception:
kafka.message.InvalidMessageException: Message found with corrupt size (0)
How do we proceed: can the log cleaner be configured to skip it, or is
there some way for us to remove that message?
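We are not aware of a log cleaner setting that skips corrupt messages, but
one way to at least locate the offending record is to dump the suspect
segment with DumpLogSegments (a sketch; the segment path below is
hypothetical):

./kafka-run-class.sh kafka.tools.DumpLogSegments \
  --files /data/kafka-logs/mytopic-0/00000000000000000000.log \
  --print-data-log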
> Are you sure consumers are always up? When they are behind, they can
> generate a lot of traffic in a small amount of time.
>
> On Mon, May 23, 2016 at 9:11 AM Anishek Agarwal wrote:
>
> > additionally all the read / writes are happening via storm topologies.
> >
>
additionally all the read / writes are happening via storm topologies.
On Mon, May 23, 2016 at 12:17 PM, Anishek Agarwal wrote:
Hello,
we are using 4 kafka machines in production with 4 topics; each topic has
either 16 or 32 partitions and a replication factor of 2. each machine has
3 disks for kafka logs.
we see a strange behaviour: high disk usage spikes on one of the disks on
all machines. it varies over time, wi
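
To see how partition directories map onto the three disks (a sketch; the
log dir paths are hypothetical), something like

du -sh /data1/kafka-logs/* /data2/kafka-logs/* /data3/kafka-logs/*

shows per-partition usage on each disk, which should make it clear which
partitions are driving the spikes.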
> That dash doesn't look like a standard '-', is it possible those
> options simply aren't being parsed correctly?
>
> -Ewen
>
> On Tue, Mar 8, 2016 at 12:26 AM, Anishek Agarwal wrote:
Hello
following the doc @
https://cwiki.apache.org/confluence/display/KAFKA/System+Tools#SystemTools-SimpleConsumerShell
i tried to print messages using the command
./kafka-run-class.sh kafka.tools.SimpleConsumerShell —-max-messages 1
--no-wait-at-logend —-print-offsets --partition 17 --offset 7644
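
For comparison, here is the same invocation typed with plain ASCII
double-dashes (the broker list and topic are hypothetical placeholders for
whatever we pass in practice):

./kafka-run-class.sh kafka.tools.SimpleConsumerShell --broker-list broker1:9092 \
  --topic mytopic --partition 17 --offset 7644 --max-messages 1 \
  --no-wait-at-logend --print-offsets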
Hello,
We have 4 topics deployed on a 4 node kafka cluster. For one of the topics
we are trying to read data from the beginning, using the kafka high level
consumer.
the topic has 32 partitions and we create 32 streams using the high level
consumer so that one partition per stream is used; we then have 32
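
A minimal sketch of that setup with the old high level consumer API (the
zookeeper address, group id, and topic name are hypothetical; a fresh
group.id plus auto.offset.reset=smallest is what makes it read from the
beginning, since the reset only applies when the group has no committed
offsets):

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;

public class ReadFromBeginning {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("zookeeper.connect", "zk1:2181");  // hypothetical ZK address
        // fresh group => no committed offsets, so the reset below applies
        props.put("group.id", "replay-" + System.currentTimeMillis());
        props.put("auto.offset.reset", "smallest");  // start at earliest message

        ConsumerConnector consumer =
                Consumer.createJavaConsumerConnector(new ConsumerConfig(props));

        // Ask for 32 streams on the 32-partition topic: one partition per stream.
        Map<String, Integer> topicCountMap = new HashMap<>();
        topicCountMap.put("mytopic", 32);            // hypothetical topic name
        List<KafkaStream<byte[], byte[]>> streams =
                consumer.createMessageStreams(topicCountMap).get("mytopic");

        // One thread per stream, each draining its single partition.
        ExecutorService pool = Executors.newFixedThreadPool(streams.size());
        for (KafkaStream<byte[], byte[]> stream : streams) {
            pool.submit(() -> stream.iterator().forEachRemaining(m ->
                    System.out.println("partition=" + m.partition()
                            + " offset=" + m.offset())));
        }
    }
}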
Hello,
i am trying to send compressed messages to kafka. topic/broker
configurations are default, and i have provided a "compression.type" of
"snappy" on the kafka producer. the uncompressed message size is 1160668
bytes; the error i get is
org.apache.kafka.common.errors.RecordTooLargeException: The message i
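
As far as we can tell, the producer versions from this era check the
serialized, uncompressed record size against max.request.size (default
1048576 bytes) before compressing, so snappy does not help a 1160668-byte
message get past that check. A minimal sketch of raising the client-side
limit (the broker address and topic name are hypothetical):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class CompressedSend {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092");  // hypothetical broker
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.ByteArraySerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.ByteArraySerializer");
        props.put("compression.type", "snappy");
        // Default max.request.size is 1048576; raise it so a 1160668-byte
        // uncompressed record passes the client-side size check.
        props.put("max.request.size", "2097152");

        try (KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(props)) {
            byte[] payload = new byte[1160668];  // same size as the failing message
            producer.send(new ProducerRecord<>("mytopic", payload)); // hypothetical topic
        }
    }
}

If the compressed size still exceeds the broker-side limit, message.max.bytes
on the broker (or max.message.bytes on the topic) would need raising as well.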