[ https://issues.apache.org/jira/browse/KAFKA-19194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17947117#comment-17947117 ]

Azhar Ahmed commented on KAFKA-19194:
-------------------------------------

This seems like the expected behavior to me. Just as we cannot allow arbitrary 
files to corrupt a database's data directory, we should similarly keep the 
Kafka data directory free of foreign files.
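
One way to act on that before restarting a broker is a pre-flight scan of the data directory for files the broker would not have created itself. The sketch below is purely illustrative and is not part of Kafka: the class name, the suffix list, and the warning output are my own assumptions, and the list of known Kafka file names is not exhaustive for every broker version.

{code:java}
import java.io.File;

public class DataDirScan {
    // File suffixes Kafka itself creates inside a partition directory.
    // Illustrative list only; adjust for your broker version.
    private static final String[] KNOWN_SUFFIXES = {
            ".log", ".index", ".timeindex", ".snapshot", ".txnindex",
            ".swap", ".cleaned", ".deleted"
    };
    // Plain file names Kafka also keeps in a partition directory.
    private static final String[] KNOWN_NAMES = { "leader-epoch-checkpoint" };

    public static void main(String[] args) {
        // Example: java DataDirScan /opt/sonus/ems/kafka/data/kafka
        File dataDir = new File(args[0]);
        File[] partitionDirs = dataDir.listFiles(File::isDirectory);
        if (partitionDirs == null) {
            System.err.println("Not a directory: " + dataDir);
            return;
        }
        for (File partitionDir : partitionDirs) {
            for (File file : partitionDir.listFiles(File::isFile)) {
                // Flag anything with an unexpected name or that the broker
                // user cannot read, so it can be fixed before startup.
                if (!isKafkaFile(file.getName()) || !file.canRead()) {
                    System.out.println("Suspect file: " + file
                            + (file.canRead() ? "" : " (not readable)"));
                }
            }
        }
    }

    private static boolean isKafkaFile(String name) {
        for (String known : KNOWN_NAMES)
            if (name.equals(known)) return true;
        for (String suffix : KNOWN_SUFFIXES)
            if (name.endsWith(suffix)) return true;
        return false;
    }
}
{code}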

> IOException thrown while starting Kafka broker if a file is present inside 
> kafka data directory.
> ------------------------------------------------------------------------------------------------
>
>                 Key: KAFKA-19194
>                 URL: https://issues.apache.org/jira/browse/KAFKA-19194
>             Project: Kafka
>          Issue Type: Bug
>    Affects Versions: 2.3.0
>         Environment: Red Hat Enterprise Linux 8.10
>            Reporter: Srinath PS
>            Priority: Major
>
> An IOException is thrown when the Kafka broker is started while one extra 
> file is present in the Kafka data directory.
> {code:java}
> 2025-02-18 11:31:12,469 UTC INFO [Log partition=fm.views-0, 
> dir=/opt/sonus/ems/kafka/data/kafka] Loading producer state till offset 3 
> with message format version 2 (kafka.log.Log)
> 2025-02-18 11:31:12,485 UTC INFO [ProducerStateManager partition=fm.views-0] 
> Loading producer state from snapshot file 
> '/opt/sonus/ems/kafka/data/kafka/fm.views-0/00000000000000000003.snapshot' 
> (kafka.log.ProducerStateManager)
> 2025-02-18 11:31:12,546 UTC INFO [ProducerStateManager partition=fm.views-0] 
> Writing producer snapshot at offset 10 (kafka.log.ProducerStateManager)
> 2025-02-18 11:31:12,572 UTC INFO [Log partition=fm.views-0, 
> dir=/opt/sonus/ems/kafka/data/kafka] Loading producer state till offset 10 
> with message format version 2 (kafka.log.Log)
> 2025-02-18 11:31:12,573 UTC INFO [ProducerStateManager partition=fm.views-0] 
> Loading producer state from snapshot file 
> '/opt/sonus/ems/kafka/data/kafka/fm.views-0/00000000000000000010.snapshot' 
> (kafka.log.ProducerStateManager)
> 2025-02-18 11:31:12,575 UTC INFO [Log partition=fm.views-0, 
> dir=/opt/sonus/ems/kafka/data/kafka] Completed load of log with 2 segments, 
> log start offset (merged: 0, local: 0) and log end offset 10 in 137 ms 
> (kafka.log.Log)
> 2025-02-18 11:31:12,583 UTC INFO [Log partition=fm.views-0, 
> dir=/opt/sonus/ems/kafka/data/kafka] Loading producer state till offset 10 
> with message format version 2 (kafka.log.Log)
> 2025-02-18 11:31:12,593 UTC INFO Completed load of log with 2 segments 
> containing 2 local segments and 0 tiered segments, tier start offset 0, first 
> untiered offset 0, local start offset 0, log end offset 10 
> (kafka.log.MergedLog)
> 2025-02-18 11:31:12,607 UTC ERROR Error while loading log dir 
> /opt/sonus/ems/kafka/data/kafka (kafka.log.LogManager)
> java.io.IOException: Could not read file 
> /opt/sonus/ems/kafka/data/kafka/__consumer_offsets-0/migrate.log
>     at 
> kafka.log.Log.$anonfun$removeTempFilesAndCollectSwapFiles$3(Log.scala:370)
>     at 
> scala.collection.TraversableLike$WithFilter.$anonfun$foreach$1(TraversableLike.scala:792)
>     at 
> scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:36)
>     at 
> scala.collection.IndexedSeqOptimized.foreach$(IndexedSeqOptimized.scala:33)
>     at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:198)
>     at 
> scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:791)
>     at kafka.log.Log.removeTempFilesAndCollectSwapFiles(Log.scala:368)
>     at kafka.log.Log.loadSegments(Log.scala:523)
>     at kafka.log.Log.<init>(Log.scala:291)
>     at kafka.log.MergedLog$.apply(MergedLog.scala:603)
>     at kafka.log.LogManager.loadLog(LogManager.scala:278)
>     at kafka.log.LogManager.$anonfun$loadLogs$12(LogManager.scala:348)
>     at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:65)
>     at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>     at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>     at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>     at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>     at java.lang.Thread.run(Thread.java:750)
> 2025-02-18 11:31:12,613 UTC INFO [Log 
> partition=queuing.ems.pm.profile_data_jmx-0, 
> dir=/opt/sonus/ems/kafka/data/kafka] Recovering unflushed segment 0 
> (kafka.log.Log)
> 2025-02-18 11:31:12,614 UTC INFO [Log 
> partition=queuing.ems.pm.profile_data_jmx-0, 
> dir=/opt/sonus/ems/kafka/data/kafka] Loading producer state till offset 0 
> with message format version 2 (kafka.log.Log)
> 2025-02-18 11:31:12,634 UTC INFO [ProducerStateManager 
> partition=queuing.ems.pm.profile_data_jmx-0] Writing producer snapshot at 
> offset 20 (kafka.log.ProducerStateManager) {code}
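
For context, the stack trace points at kafka.log.Log.removeTempFilesAndCollectSwapFiles, which suggests the broker walks every regular file in the partition directory during log loading and aborts the whole log dir if one of them cannot be read. The snippet below is a minimal reconstruction of that apparent check, not the actual Kafka source (which is Scala); the method name used here is an assumption.

{code:java}
import java.io.File;
import java.io.IOException;

public class RemoveTempFilesSketch {
    // Hypothetical reconstruction of the check implied by the stack trace;
    // it does not claim to reproduce kafka.log.Log exactly.
    static void checkPartitionDir(File dir) throws IOException {
        for (File file : dir.listFiles()) {
            if (!file.isFile())
                continue;
            // Any regular file the broker process cannot read aborts loading of
            // the whole log dir, matching "Could not read file .../migrate.log".
            if (!file.canRead())
                throw new IOException("Could not read file " + file);
            // ...cleanup of .cleaned/.deleted temp files and collection of
            // .swap files would follow here in the real code path...
        }
    }
}
{code}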



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
