[ https://issues.apache.org/jira/browse/KAFKA-6266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16268257#comment-16268257 ]

Michal Klempa commented on KAFKA-6266:
--------------------------------------

I ran into a situation where my log segment is 0 bytes in length:
{code}
__consumer_offsets-35]# ls -l
total 0
-rw-r--r--. 1 kafka kafka 10485760 nov 27 20:20 00000000000000000183.index
-rw-r--r--. 1 kafka kafka        0 nov  1 12:39 00000000000000000183.log
-rw-r--r--. 1 kafka kafka 10485756 nov 27 20:20 00000000000000000183.timeindex
{code}
The base offset of the segment is 183 (as the file name indicates).
Debugging the LogCleanerManager shows that:
1. LogCleanerManager.grabFilthiestCompactedLog is called
2. then, line 99 (Kafka 0.10.0.2 version tag in git):
{code}
  val (firstDirtyOffset, firstUncleanableDirtyOffset) =
    LogCleanerManager.cleanableOffsets(log, topicPartition, lastClean, now)
{code}
calls the cleanableOffsets method.
3. in cleanableOffsets, I see that
{code}
      val offset = lastCleanOffset.getOrElse(logStartOffset)
{code}
returns the value 138 as the last cleaned offset.
4. as this value is lower than the segment's base offset, the logStartOffset is used as the firstDirtyOffset.
5. then,
{code}
  val dirtyNonActiveSegments = log.logSegments(firstDirtyOffset, log.activeSegment.baseOffset)
{code}
tries to select segments between 183 and 183, which returns 0 segments (shouldn't it return at least the active segment?).
6. even if it returned something, that segment is the active one, so it is not cleaned

and the process repeats on a timer.
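The empty selection in step 5 can be reproduced with a minimal sketch. The TreeMap stand-in for the broker's segment map and the object/method names below are my own simplification, not the actual Kafka implementation; only the offset values (183) and the half-open shape of the logSegments(from, to) call come from the steps above:
{code}
// Minimal sketch of half-open segment-range selection.
// Segments are keyed by base offset; range(from, to) selects keys in
// [from, to), so from == to selects nothing -- not even the segment
// whose base offset equals `from`.
import scala.collection.immutable.TreeMap

object CleanerRangeSketch {
  // Hypothetical segment map: one segment with base offset 183, which is
  // also the active segment (mirrors the directory listing above).
  val segmentsByBaseOffset = TreeMap(183L -> "00000000000000000183.log")

  def logSegments(from: Long, to: Long): Iterable[String] =
    segmentsByBaseOffset.range(from, to).values // half-open: [from, to)

  def main(args: Array[String]): Unit = {
    val firstDirtyOffset = 183L // reset to logStartOffset in step 4
    val activeBaseOffset = 183L // base offset of the active segment
    val dirtyNonActiveSegments = logSegments(firstDirtyOffset, activeBaseOffset)
    println(dirtyNonActiveSegments.size) // prints 0: nothing to clean
  }
}
{code}
So with a single, empty, active segment, every cleaner pass finds zero dirty non-active segments and the warning recurs.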
The error would probably go away if I were able to produce messages to Kafka. 
But I am not able to produce any messages, and no new offsets get committed. 
Replication also stopped working in my case (although I have min.insync.replicas = 1, so this 
would probably not block producing).
The consumer offsets are under-replicated, and this is not going to fix 
itself automatically.

This situation happened after several periods of downtime. Is there any way to get 
out of this state?


> Kafka 1.0.0 : Repeated occurrence of WARN Resetting first dirty offset of 
> __consumer_offsets-xx to log start offset 203569 since the checkpointed 
> offset 120955 is invalid. (kafka.log.LogCleanerManager$)
> ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: KAFKA-6266
>                 URL: https://issues.apache.org/jira/browse/KAFKA-6266
>             Project: Kafka
>          Issue Type: Bug
>          Components: offset manager
>    Affects Versions: 1.0.0
>         Environment: CentOS 7, Apache kafka_2.12-1.0.0
>            Reporter: VinayKumar
>
> I upgraded Kafka from 0.10.2.1 to 1.0.0 version. From then, I see the below 
> warnings in the log.
> I'm seeing these continuously in the log, and I want them fixed so that 
> they won't repeat. Can someone please help me fix the below warnings?
> WARN Resetting first dirty offset of __consumer_offsets-17 to log start 
> offset 3346 since the checkpointed offset 3332 is invalid. 
> (kafka.log.LogCleanerManager$)
> WARN Resetting first dirty offset of __consumer_offsets-23 to log start 
> offset 4 since the checkpointed offset 1 is invalid. 
> (kafka.log.LogCleanerManager$)
> WARN Resetting first dirty offset of __consumer_offsets-19 to log start 
> offset 203569 since the checkpointed offset 120955 is invalid. 
> (kafka.log.LogCleanerManager$)
> WARN Resetting first dirty offset of __consumer_offsets-35 to log start 
> offset 16957 since the checkpointed offset 7 is invalid. 
> (kafka.log.LogCleanerManager$)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
