Hi Nicolas,
In my experience, there are only two ways out:
1) wait for the retention time to pass so the data gets deleted (this is
usually unacceptable)
2) trace the offset of the corrupt message on all affected subscriptions
and skip the message by overwriting the committed offset with offset + 1
The problem is that when
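A minimal sketch of option 2's skip logic, without the real Kafka client: the CRC comparison mirrors the stored-vs-computed check from the exception below, and `next_safe_offset` is a hypothetical helper standing in for whatever mechanism you use to overwrite the committed offset for the affected group/partition:

```python
import zlib

def is_corrupt(stored_crc: int, payload: bytes) -> bool:
    # Kafka stores a CRC32 with each message; the consumer recomputes
    # it over the payload and raises InvalidMessageException on mismatch.
    return (zlib.crc32(payload) & 0xFFFFFFFF) != stored_crc

def next_safe_offset(bad_offset: int) -> int:
    # "Skip" the corrupt record by committing bad_offset + 1 for the
    # affected consumer group and partition (hypothetical helper; the
    # actual commit goes through your offset store, e.g. ZooKeeper
    # for 0.8.x consumers).
    return bad_offset + 1
```

After overwriting the offset this way, the consumer resumes on the next record instead of re-fetching and re-failing on the corrupt one.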
Hello,
I'm using Confluent Kafka (0.8.2.0-cp). When I try to process messages
from my Kafka topic with Spark Streaming, I get the following error:
kafka.message.InvalidMessageException: Message is corrupt (stored crc =
3561357254, computed crc = 171652633)
at