Thanks, Jiangjie,  

Yes, we had reduced segment.index.bytes to 1K in order to maintain a more 
fine-grained offset index, which we need in order to fetch start and end 
offsets for a given span of time, say 15 mins. Ideally, changing only 
index.interval.bytes to 1K should have been sufficient for that, but we found 
it didn't give the expected results. We use the simple consumer offset API to 
fetch the offsets for given timestamps before starting to consume messages for 
that period, and we got the same offset every time, even though data was 
produced during that window.
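(For anyone following along: the arithmetic below is my own illustration, not from either email, but it shows why a 1K index fills up so fast. Each entry in a Kafka offset index is 8 bytes, a 4-byte relative offset plus a 4-byte file position, so a 1 KB index holds only 128 entries, which matches the "full index (size = 128)" in the stack trace below.)

```java
public class IndexCapacity {
    // Each offset-index entry is 8 bytes: a 4-byte relative offset
    // plus a 4-byte physical file position.
    static final int ENTRY_BYTES = 8;

    // Maximum number of index entries for a given segment.index.bytes
    // (a.k.a. log.index.size.max.bytes at the broker level).
    static int maxEntries(int segmentIndexBytes) {
        return segmentIndexBytes / ENTRY_BYTES;
    }

    public static void main(String[] args) {
        // 1 KB index, as configured above -> 128 entries,
        // the exact size reported by the IllegalArgumentException.
        System.out.println(maxEntries(1024));
        // Broker default of 10 MB -> 1,310,720 entries.
        System.out.println(maxEntries(10 * 1024 * 1024));
    }
}
```

So with segment.index.bytes at 1K, the index caps out after 128 sparse entries even though the segment itself may be far from full.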

-Zakee



> On Jun 20, 2015, at 4:23 PM, Jiangjie Qin <[email protected]> wrote:
> 
> It seems that your log.index.size.max.bytes was 1K, which was probably too
> small. This will cause your index file to reach its upper limit before it
> fully indexes the log segment.
> 
> Jiangjie (Becket) Qin
> 
> On 6/18/15, 4:52 PM, "Zakee" <[email protected]> wrote:
> 
>> Any ideas on why one of the brokers, which was down for a day, fails to
>> restart with the exception below? The 10-node cluster has been up and
>> running fine for quite a few weeks.
>> 
>> [2015-06-18 16:44:25,746] ERROR [app=broker] [main] There was an error in
>> one of the threads during logs loading:
>> java.lang.IllegalArgumentException: requirement failed: Attempt to append
>> to a full index (size = 128). (kafka.log.LogManager)
>> [2015-06-18 16:44:25,747] FATAL [app=broker] [main] [Kafka Server 13],
>> Fatal error during KafkaServer startup. Prepare to shutdown
>> (kafka.server.KafkaServer)
>> java.lang.IllegalArgumentException: requirement failed: Attempt to append
>> to a full index (size = 128).
>>       at scala.Predef$.require(Predef.scala:233)
>>       at kafka.log.OffsetIndex$$anonfun$append$1.apply$mcV$sp(OffsetIndex.scala:198)
>>       at kafka.log.OffsetIndex$$anonfun$append$1.apply(OffsetIndex.scala:197)
>>       at kafka.log.OffsetIndex$$anonfun$append$1.apply(OffsetIndex.scala:197)
>>       at kafka.utils.Utils$.inLock(Utils.scala:535)
>>       at kafka.log.OffsetIndex.append(OffsetIndex.scala:197)
>>       at kafka.log.LogSegment.recover(LogSegment.scala:187)
>>       at kafka.log.Log.recoverLog(Log.scala:205)
>>       at kafka.log.Log.loadSegments(Log.scala:177)
>>       at kafka.log.Log.<init>(Log.scala:67)
>>       at kafka.log.LogManager$$anonfun$loadLogs$2$$anonfun$3$$anonfun$apply$7$$anonfun$apply$1.apply$mcV$sp(LogManager.scala:142)
>>       at kafka.utils.Utils$$anon$1.run(Utils.scala:54)
>>       at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439)
>>       at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
>>       at java.util.concurrent.FutureTask.run(FutureTask.java:138)
>>       at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
>>       at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
>>       at java.lang.Thread.run(Thread.java:662)
>> 
>> 
>> Thanks
>> Zakee
>> 
>> 
>> 
> 
