[ https://issues.apache.org/jira/browse/KAFKA-7656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16951081#comment-16951081 ]

Viktor Loosz commented on KAFKA-7656:
-------------------------------------

Hi, sorry for the late reply. 

The problem seems to be solved (for now): the errors have stopped and there are no
more failed fetch requests.

To answer your previous question, segment 0 had a different number of records on
the 3 brokers (rf = 3). The leader and one of the followers had 19 records,
while the other follower had only 13. That is why we thought it was one of the
replicas and not a consumer. Kafkacat showed the partition as in sync with all
replicas:
{noformat}
    partition 11, leader 60, replicas: 60,26,27, isrs: 27,60,26
{noformat}
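(For reference, a metadata listing in that format can be produced with kafkacat's -L option; the broker address and topic name below are placeholders.)
{noformat}
kafkacat -L -b <broker>:9092 -t <topic>
{noformat}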
Since the segments have been rotated, everything seems to be in order:
{noformat}
# FOLLOWER 1
$ kafka-dump-log.sh --files 00000000000000000000.log | wc -l
33
# FOLLOWER 2
$ kafka-dump-log.sh --files 00000000000000000000.log | wc -l
33
# LEADER
$ kafka-dump-log.sh --files 00000000000000000000.log | wc -l
33
{noformat}
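In case it is useful for reproducing, the segment contents can also be compared record by record rather than by line count; something along these lines should work on 2.0, although the exact flags may vary between versions:
{noformat}
# dump individual records instead of batch summaries; add --offsets-decoder
# when inspecting __consumer_offsets segments
kafka-dump-log.sh --files 00000000000000000000.log --deep-iteration --print-data-log
{noformat}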
Please let me know if I can help in any way to get around this issue, even
though we are no longer affected.

Thanks,

Viktor

> ReplicaManager fetch fails on leader due to long/integer overflow
> -----------------------------------------------------------------
>
>                 Key: KAFKA-7656
>                 URL: https://issues.apache.org/jira/browse/KAFKA-7656
>             Project: Kafka
>          Issue Type: Bug
>          Components: core
>    Affects Versions: 2.0.1
>         Environment: Linux 3.10.0-693.el7.x86_64 #1 SMP Thu Jul 6 19:56:57 
> EDT 2017 x86_64 x86_64 x86_64 GNU/Linux
>            Reporter: Patrick Haas
>            Assignee: Jose Armando Garcia Sancio
>            Priority: Major
>
> (Note: From 2.0.1-cp1 from confluent distribution)
> {{[2018-11-19 21:13:13,687] ERROR [ReplicaManager broker=103] Error 
> processing fetch operation on partition __consumer_offsets-20, offset 0 
> (kafka.server.ReplicaManager)}}
> {{java.lang.IllegalArgumentException: Invalid max size -2147483648 for log 
> read from segment FileRecords(file= 
> /prod/kafka/data/kafka-logs/__consumer_offsets-20/00000000000000000000.log, 
> start=0, end=2147483647)}}
> {{ at kafka.log.LogSegment.read(LogSegment.scala:274)}}
> {{ at kafka.log.Log$$anonfun$read$2.apply(Log.scala:1159)}}
> {{ at kafka.log.Log$$anonfun$read$2.apply(Log.scala:1114)}}
> {{ at kafka.log.Log.maybeHandleIOException(Log.scala:1842)}}
> {{ at kafka.log.Log.read(Log.scala:1114)}}
> {{ at 
> kafka.server.ReplicaManager.kafka$server$ReplicaManager$$read$1(ReplicaManager.scala:912)}}
> {{ at 
> kafka.server.ReplicaManager$$anonfun$readFromLocalLog$1.apply(ReplicaManager.scala:974)}}
> {{ at 
> kafka.server.ReplicaManager$$anonfun$readFromLocalLog$1.apply(ReplicaManager.scala:973)}}
> {{ at 
> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)}}
> {{ at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)}}
> {{ at kafka.server.ReplicaManager.readFromLocalLog(ReplicaManager.scala:973)}}
> {{ at kafka.server.ReplicaManager.readFromLog$1(ReplicaManager.scala:802)}}
> {{ at kafka.server.ReplicaManager.fetchMessages(ReplicaManager.scala:815)}}
> {{ at kafka.server.KafkaApis.handleFetchRequest(KafkaApis.scala:685)}}
> {{ at kafka.server.KafkaApis.handle(KafkaApis.scala:114)}}
> {{ at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:69)}}
> {{ at java.lang.Thread.run(Thread.java:748)}}
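A note on the quoted error: -2147483648 is Integer.MIN_VALUE, which is consistent with a 32-bit overflow of a size derived from Int.MaxValue (note the segment's end=2147483647 in the FileRecords line). A minimal, purely illustrative Scala sketch of that failure mode follows; it is not Kafka's actual code, and the variable names are made up:
{noformat}
object OverflowSketch {
  def main(args: Array[String]): Unit = {
    // A fetch asking for "everything": Int.MaxValue bytes (2147483647).
    val requestedMaxBytes: Int = Int.MaxValue
    // Hypothetical extra bytes added somewhere before the bounds check.
    val adjustment: Int = 1

    // 32-bit arithmetic wraps around: 2147483647 + 1 == -2147483648,
    // the "Invalid max size" reported in the stack trace above.
    val overflowed: Int = requestedMaxBytes + adjustment
    println(overflowed) // prints -2147483648

    // Doing the arithmetic in Long and clamping avoids the wrap-around.
    val clamped: Int =
      math.min(requestedMaxBytes.toLong + adjustment, Int.MaxValue.toLong).toInt
    println(clamped) // prints 2147483647
  }
}
{noformat}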


