[
https://issues.apache.org/jira/browse/KAFKA-7635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16802888#comment-16802888
]
Paul Whalen commented on KAFKA-7635:
------------------------------------
For what it's worth, my team is also running 2.0.0 and seems to have
encountered this error in our development environment after doing some
maintenance work to expand the cluster. Restarting the broker did not fix the
issue; the replica fetcher thread would still die in short order. We
ultimately wiped the broker's data directory and restarted it to get back to a
healthy state.
{code:java}
ERROR [ReplicaFetcher replicaId=3, leaderId=1, fetcherId=0] Error due to (kafka.server.ReplicaFetcherThread)
org.apache.kafka.common.KafkaException: Error processing data for partition topic.a-0 offset 3395
    at kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$2$$anonfun$apply$mcV$sp$1$$anonfun$apply$2.apply(AbstractFetcherThread.scala:207)
    at kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$2$$anonfun$apply$mcV$sp$1$$anonfun$apply$2.apply(AbstractFetcherThread.scala:172)
    at scala.Option.foreach(Option.scala:257)
    at kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$2$$anonfun$apply$mcV$sp$1.apply(AbstractFetcherThread.scala:172)
    at kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$2$$anonfun$apply$mcV$sp$1.apply(AbstractFetcherThread.scala:169)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
    at kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$2.apply$mcV$sp(AbstractFetcherThread.scala:169)
    at kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$2.apply(AbstractFetcherThread.scala:169)
    at kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$2.apply(AbstractFetcherThread.scala:169)
    at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:251)
    at kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:167)
    at kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:114)
    at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:82)
Caused by: kafka.common.UnexpectedAppendOffsetException: Unexpected offset in append to topic.a-0. First offset 3389 is less than the next offset 3395. First 10 offsets in append: List(3389, 3390, 3391, 3392, 3393, 3394, 3395, 3396, 3397, 3398), last offset in append: 4945. Log start offset = 3353
    at kafka.log.Log$$anonfun$append$2.apply(Log.scala:825)
    at kafka.log.Log$$anonfun$append$2.apply(Log.scala:752)
    at kafka.log.Log.maybeHandleIOException(Log.scala:1837)
    at kafka.log.Log.append(Log.scala:752)
    at kafka.log.Log.appendAsFollower(Log.scala:733)
    at kafka.cluster.Partition$$anonfun$doAppendRecordsToFollowerOrFutureReplica$1.apply(Partition.scala:589)
    at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:251)
    at kafka.utils.CoreUtils$.inReadLock(CoreUtils.scala:257)
    at kafka.cluster.Partition.doAppendRecordsToFollowerOrFutureReplica(Partition.scala:576)
    at kafka.cluster.Partition.appendRecordsToFollowerOrFutureReplica(Partition.scala:596)
    at kafka.server.ReplicaFetcherThread.processPartitionData(ReplicaFetcherThread.scala:129)
    at kafka.server.ReplicaFetcherThread.processPartitionData(ReplicaFetcherThread.scala:43)
    at kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$2$$anonfun$apply$mcV$sp$1$$anonfun$apply$2.apply(AbstractFetcherThread.scala:186)
    ... 13 more
{code}
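
To make the failure condition above concrete, here is a self-contained, hypothetical sketch (not the actual {{Log.append}} code; the object and method names are illustrative) of the sanity check a follower performs before appending fetched batches: an append whose first offset is below the local log end offset is rejected, which is what {{UnexpectedAppendOffsetException}} reports above.
{code:scala}
// Hypothetical sketch of the follower-append sanity check, paraphrased from the
// exception message above; class and method names here are illustrative, not Kafka's.
object AppendOffsetCheckSketch {
  final case class UnexpectedAppendOffset(msg: String) extends RuntimeException(msg)

  def checkFollowerAppend(partition: String,
                          firstOffsetInBatch: Long,
                          logEndOffset: Long,
                          logStartOffset: Long): Unit =
    if (firstOffsetInBatch < logEndOffset)
      throw UnexpectedAppendOffset(
        s"Unexpected offset in append to $partition. First offset $firstOffsetInBatch " +
          s"is less than the next offset $logEndOffset. Log start offset = $logStartOffset")

  def main(args: Array[String]): Unit =
    // Same shape as the failure above: the fetched batch starts at 3389,
    // but the follower's log already ends at 3395.
    checkFollowerAppend("topic.a-0", firstOffsetInBatch = 3389L, logEndOffset = 3395L, logStartOffset = 3353L)
}
{code}
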
> FetcherThread stops processing after "Error processing data for partition"
> --------------------------------------------------------------------------
>
> Key: KAFKA-7635
> URL: https://issues.apache.org/jira/browse/KAFKA-7635
> Project: Kafka
> Issue Type: Bug
> Components: replication
> Affects Versions: 2.0.0
> Reporter: Steven Aerts
> Priority: Major
> Attachments: stacktraces.txt
>
>
> After disabling unclean leader election again, following recovery from a situation
> where we had enabled it due to a split brain in ZooKeeper, we saw that some of our
> brokers stopped replicating their partitions.
> Digging into the logs, we saw that the replica fetcher thread was stopped because one
> partition had a failure which threw an [{{Error processing data for partition}}
> exception|https://github.com/apache/kafka/blob/2.0.0/core/src/main/scala/kafka/server/AbstractFetcherThread.scala#L207].
> But the broker kept running and serving the partitions for which it was the leader.
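> A hypothetical, simplified sketch of that per-partition error handling (paraphrased,
> not the actual 2.0.0 code; only the exception and {{KafkaException}} class names are
> taken from Kafka itself) shows why a single bad partition takes down the whole fetcher
> thread while the broker itself keeps running:
> {code:scala}
> import scala.collection.mutable
> import org.apache.kafka.common.KafkaException
> import org.apache.kafka.common.errors.{CorruptRecordException, KafkaStorageException}
>
> // Hypothetical sketch: only a couple of exception types are handled per partition;
> // anything else is wrapped in a KafkaException and rethrown, which ends the fetcher
> // thread's run loop. The broker process stays up and keeps serving leader partitions.
> object FetcherErrorHandlingSketch {
>   val partitionsWithError = mutable.Set.empty[String]
>
>   def processFetchedData(partition: String, fetchOffset: Long)(append: => Unit): Unit =
>     try append
>     catch {
>       case _: CorruptRecordException => partitionsWithError += partition // retried later
>       case _: KafkaStorageException  => partitionsWithError += partition // retried later
>       case e: Throwable =>
>         // e.g. UnexpectedAppendOffsetException: rethrown, so the fetcher thread dies
>         throw new KafkaException(s"Error processing data for partition $partition offset $fetchOffset", e)
>     }
> }
> {code}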
> We saw three different types of exceptions triggering this (example
> stacktraces attached):
> * {{kafka.common.UnexpectedAppendOffsetException}}
> * {{Trying to roll a new log segment for topic partition partition-b-97 with start offset 1388 while it already exists.}}
> * {{Kafka scheduler is not running.}}
> We think there are two acceptable ways for the Kafka broker to handle this:
> * Mark those partitions as partitions with errors and handle them accordingly,
> as is done [when a {{CorruptRecordException}} or
> {{KafkaStorageException}}|https://github.com/apache/kafka/blob/2.0.0/core/src/main/scala/kafka/server/AbstractFetcherThread.scala#L196]
> is thrown.
> * Exit the broker, as is done [when log truncation is not
> allowed|https://github.com/apache/kafka/blob/2.0.0/core/src/main/scala/kafka/server/ReplicaFetcherThread.scala#L189].
>
> Maybe even a combination of both: our (probably naive) idea is that for the
> first two types the first strategy would be best, but for the last type it is
> probably better to re-throw a {{FatalExitError}} and exit the broker.
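> A hypothetical sketch of that combination (an illustration of the idea, not a patch;
> which of the quoted messages maps to which exception class is our assumption, except
> for {{UnexpectedAppendOffsetException}}) could look like:
> {code:scala}
> // Hypothetical policy sketch: quarantine the partition for append-side problems,
> // but stop the broker when the scheduler is gone, since that is not recoverable
> // from inside a fetcher thread.
> object ProposedFetcherErrorPolicySketch {
>   sealed trait Action
>   case object MarkPartitionAsFailed extends Action // like CorruptRecordException / KafkaStorageException today
>   case object ExitBroker            extends Action // i.e. throw FatalExitError so the process stops
>
>   def actionFor(t: Throwable): Action = t match {
>     case _: kafka.common.UnexpectedAppendOffsetException =>
>       MarkPartitionAsFailed
>     case e if Option(e.getMessage).exists(_.contains("Trying to roll a new log segment")) =>
>       MarkPartitionAsFailed
>     case e if Option(e.getMessage).exists(_.contains("Kafka scheduler is not running")) =>
>       ExitBroker
>     case _ =>
>       ExitBroker // or keep today's behaviour (only the fetcher thread dies); open to discussion
>   }
> }
> {code}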