Hi,
Thanks for the reply.
But is there a way to skip this corrupt record from the Flink consumer?
The Flink job goes into a loop: it restarts and then fails again at the same record.
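
The only workaround I can think of is to restart the consumer from an offset just past the bad record, roughly like the sketch below (broker address, group id and the corrupt offset are placeholders; the topic/partition come from the stack trace, and as far as I understand the start position is only honoured when the job is not restoring offsets from a checkpoint). Not sure this is the intended way:

import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;
import org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartition;

public class SkipPastCorruptRecord {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // placeholder
        props.setProperty("group.id", "my-flink-group");          // placeholder

        // topic_log, partition 3 from the exception; the corrupt offset itself
        // has to be taken from the broker/consumer logs (placeholder value here)
        long corruptOffset = 123456L;
        Map<KafkaTopicPartition, Long> startOffsets = new HashMap<>();
        startOffsets.put(new KafkaTopicPartition("topic_log", 3), corruptOffset + 1);

        FlinkKafkaConsumer<String> consumer =
                new FlinkKafkaConsumer<>("topic_log", new SimpleStringSchema(), props);
        // start this partition just past the bad record; partitions not listed
        // fall back to the normal group-offset behaviour
        consumer.setStartFromSpecificOffsets(startOffsets);

        // env.addSource(consumer) ... rest of the job unchanged
    }
}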


On Mon, 1 Apr 2019, 07:34 Congxian Qiu, <qcx978132...@gmail.com> wrote:

> Hi
> As you said, consuming from the Ubuntu terminal gives the same error, so maybe
> you could send an email to the Kafka user mailing list.
>
> Best, Congxian
> On Apr 1, 2019, 05:26 +0800, Sushant Sawant <sushantsawant7...@gmail.com>,
> wrote:
>
> Hi team,
> I am facing this exception:
>
> org.apache.kafka.common.KafkaException: Received exception when fetching
> the next record from topic_log-3. If needed, please seek past the record to
> continue consumption.
>         at org.apache.kafka.clients.consumer.internals.Fetcher$PartitionRecords.fetchRecords(Fetcher.java:1076)
>         at org.apache.kafka.clients.consumer.internals.Fetcher$PartitionRecords.access$1200(Fetcher.java:944)
>         at org.apache.kafka.clients.consumer.internals.Fetcher.fetchRecords(Fetcher.java:567)
>         at org.apache.kafka.clients.consumer.internals.Fetcher.fetchedRecords(Fetcher.java:528)
>         at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1086)
>         at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1043)
>         at org.apache.flink.streaming.connectors.kafka.internal.KafkaConsumerThread.run(KafkaConsumerThread.java:257)
> Caused by: org.apache.kafka.common.errors.CorruptRecordException: Record
> size is less than the minimum record overhead (14)
>
>
> Also, when I consume messages from the Ubuntu terminal consumer, I get the
> same error.
>
> How can I skip this corrupt record?
